HPX 0.9.12

The STE||AR Group

Distributed under the Boost Software License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)


Table of Contents

Preface
What's New
HPX V0.9.12
Previous HPX Releases
HPX V0.9.11 (Nov 11, 2015)
HPX V0.9.10 (Mar 24, 2015)
HPX V0.9.9 (Oct 31, 2014, codename Spooky)
HPX V0.9.8 (Mar 24, 2014)
HPX V0.9.7 (Nov 13, 2013)
HPX V0.9.6 (Jul 30, 2013)
HPX V0.9.5 (Jan 16, 2013)
HPX V0.9.0 (Jul 5, 2012)
HPX V0.8.1 (Apr 21, 2012)
HPX V0.8.0 (Mar 23, 2012)
HPX V0.7.0 (Dec 12, 2011)
Tutorial
Getting Started
How to Use HPX Applications with PBS
How to Use HPX Applications with SLURM
Introduction
What makes our Systems Slow?
Technology Demands New Response
Governing Principles applied while Developing HPX
Examples
Fibonacci
Hello World
Accumulator
Interest Calculator
Futurization Example
Manual
The HPX Build System
CMake Basics
Build Prerequisites
Installing Boost Libraries
Building HPX
CMake Variables used to configure HPX
CMake Toolchains shipped with HPX
Build recipes
Setting up the HPX Documentation Tool Chain
Building Projects using HPX
Using HPX with pkg-config
Using HPX with CMake based projects
Testing HPX
Running tests manually
Issue Tracker
Buildbot
Launching HPX
Configure HPX Applications
The HPX INI File Format
Built-in Default Configuration Settings
Loading INI Files
Loading Components
Logging
HPX Command Line Options
More Details about HPX Command Line Options
HPX System Components
The HPX I/O-streams Component
Writing HPX applications
Global Names
Applying Actions
Action Type Definition
Action Invocation
Applying an Action Asynchronously without any Synchronization
Applying an Action Asynchronously with Synchronization
Applying an Action Synchronously
Applying an Action with a Continuation but without any Synchronization
Applying an Action with a Continuation and with Synchronization
Action Error Handling
Writing Components
Defining Components
Defining Client Side Representation Classes
Creating Component Instances
Using Component Instances
Using LCOs
Extended Facilities for Futures
High Level Parallel Facilities
Using Parallel Algorithms
Executors and Executor Traits
Executor Parameters and Executor Parameter Traits
Using Task Blocks
Extensions for Task Blocks
Error Handling
Performance Counters
Performance Counter Names
Consuming Performance Counter Data
Consuming Performance Counter Data from the Command Line
Consuming Performance Counter Data using the HPX API
Providing Performance Counter Data
Exposing Performance Counter Data using a Simple Function
Implementing a Full Performance Counter
Existing HPX Performance Counters
HPX Thread Scheduling Policies
Index
Reference
Header <hpx/components/component_storage/migrate_from_storage.hpp>
Function template migrate_from_storage
Header <hpx/components/component_storage/migrate_to_storage.hpp>
Function template migrate_to_storage
Function template migrate_to_storage
Header <hpx/error.hpp>
Type error — Possible error conditions.
Header <hpx/exception.hpp>
Class error_code — A hpx::error_code represents an arbitrary error condition.
Class exception — A hpx::exception is the main exception type used by HPX to report errors.
Struct thread_interrupted — A hpx::thread_interrupted is the exception type used by HPX to interrupt a running HPX thread.
Function diagnostic_information — Extract the diagnostic information embedded in the given exception and return a string holding a formatted message.
Function diagnostic_information — Extract the diagnostic information embedded in the given exception and return a string holding a formatted message.
Function get_error_what — Return the error message of the thrown exception.
Function get_error_what — Return the error message of the thrown exception.
Function get_error_locality_id — Return the locality id where the exception was thrown.
Function get_error_locality_id — Return the locality id where the exception was thrown.
Function get_error — Return the error value code of the thrown exception.
Function get_error — Return the error value code of the thrown exception.
Function get_error_host_name — Return the hostname of the locality where the exception was thrown.
Function get_error_host_name — Return the hostname of the locality where the exception was thrown.
Function get_error_process_id — Return the (operating system) process id of the locality where the exception was thrown.
Function get_error_process_id — Return the (operating system) process id of the locality where the exception was thrown.
Function get_error_env — Return the environment of the OS-process at the point the exception was thrown.
Function get_error_env — Return the environment of the OS-process at the point the exception was thrown.
Function get_error_function_name — Return the function name from which the exception was thrown.
Function get_error_function_name — Return the function name from which the exception was thrown.
Function get_error_backtrace — Return the stack backtrace from the point the exception was thrown.
Function get_error_backtrace — Return the stack backtrace from the point the exception was thrown.
Function get_error_file_name — Return the (source code) file name of the function from which the exception was thrown.
Function get_error_file_name — Return the (source code) file name of the function from which the exception was thrown.
Function get_error_line_number — Return the line number in the (source code) file of the function from which the exception was thrown.
Function get_error_line_number — Return the line number in the (source code) file of the function from which the exception was thrown.
Function get_error_os_thread — Return the sequence number of the OS-thread used to execute HPX-threads from which the exception was thrown.
Function get_error_os_thread — Return the sequence number of the OS-thread used to execute HPX-threads from which the exception was thrown.
Function get_error_thread_id — Return the unique thread id of the HPX-thread from which the exception was thrown.
Function get_error_thread_id — Return the unique thread id of the HPX-thread from which the exception was thrown.
Function get_error_thread_description — Return any additionally available thread description of the HPX-thread from which the exception was thrown.
Function get_error_thread_description — Return any additionally available thread description of the HPX-thread from which the exception was thrown.
Function get_error_config — Return the HPX configuration information at the point from which the exception was thrown.
Function get_error_config — Return the HPX configuration information at the point from which the exception was thrown.
Function get_error_state — Return the HPX runtime state information at which the exception was thrown.
Function get_error_state — Return the HPX runtime state information at which the exception was thrown.
Macro HPX_THROW_EXCEPTION — Throw a hpx::exception initialized from the given parameters.
Macro HPX_THROWS_IF — Either throw a hpx::exception or initialize hpx::error_code from the given parameters.
Header <hpx/exception_fwd.hpp>
Global throws — Predefined error_code object used as "throw on error" tag.
Header <hpx/exception_list.hpp>
Class exception_list
Header <hpx/hpx_finalize.hpp>
Function finalize — Main function to gracefully terminate the HPX runtime system.
Function finalize — Main function to gracefully terminate the HPX runtime system.
Function terminate — Terminate any application non-gracefully.
Function disconnect — Disconnect this locality from the application.
Function disconnect — Disconnect this locality from the application.
Function stop — Stop the runtime system.
Header <hpx/hpx_fwd.hpp>
Type definition startup_function_type
Type definition shutdown_function_type
Function find_root_locality — Return the global id representing the root locality.
Function find_all_localities — Return the list of global ids representing all localities available to this application.
Function find_all_localities — Return the list of global ids representing all localities available to this application which support the given component type.
Function find_remote_localities — Return the list of locality ids of remote localities supporting the given component type. By default this function will return the list of all remote localities (all but the current locality).
Function find_remote_localities — Return the list of locality ids of remote localities supporting the given component type. By default this function will return the list of all remote localities (all but the current locality).
Function find_locality — Return the global id representing an arbitrary locality which supports the given component type.
Function get_num_localities_sync — Return the number of localities which are currently registered for the running application.
Function get_initial_num_localities — Return the number of localities which were registered at startup for the running application.
Function get_num_localities — Asynchronously return the number of localities which are currently registered for the running application.
Function get_num_localities_sync — Return the number of localities which are currently registered for the running application.
Function get_num_localities — Asynchronously return the number of localities which are currently registered for the running application.
Function register_pre_startup_function — Add a function to be executed by an HPX thread before hpx_main but guaranteed before any startup function is executed (system-wide).
Function register_startup_function — Add a function to be executed by an HPX thread before hpx_main but guaranteed after any pre-startup function is executed (system-wide).
Function register_pre_shutdown_function — Add a function to be executed by an HPX thread during hpx::finalize() but guaranteed before any shutdown function is executed (system-wide).
Function register_shutdown_function — Add a function to be executed by an HPX thread during hpx::finalize() but guaranteed after any pre-shutdown function is executed (system-wide).
Function is_starting — Test whether the runtime system is currently being started.
Function is_running — Test whether the runtime system is currently running.
Function is_stopped — Test whether the runtime system is currently stopped.
Function is_stopped_or_shutting_down — Test whether the runtime system is currently being shut down.
Function get_thread_name — Return the name of the calling thread.
Function get_num_worker_threads — Return the number of worker OS-threads used to execute HPX threads.
Function get_system_uptime — Return the system uptime measured on the thread executing this call.
Function get_colocation_id_sync — Return the id of the locality the object referenced by the given id is currently located on.
Function get_colocation_id — Asynchronously return the id of the locality the object referenced by the given id is currently located on.
Function start_active_counters — Start all active performance counters, optionally naming the section of code.
Function reset_active_counters — Resets all active performance counters.
Function stop_active_counters — Stop all active performance counters.
Function evaluate_active_counters — Evaluate and output all active performance counters, optionally naming the point in code marked by this function.
Function create_message_handler — Create an instance of a message handler plugin.
Function create_binary_filter — Create an instance of a binary filter plugin.
Header <hpx/hpx_init.hpp>
Function init — Main entry point for launching the HPX runtime system.
Function init — Main entry point for launching the HPX runtime system.
Function init — Main entry point for launching the HPX runtime system.
Function init — Main entry point for launching the HPX runtime system.
Function init — Main entry point for launching the HPX runtime system.
Function init — Main entry point for launching the HPX runtime system.
Function init — Main entry point for launching the HPX runtime system.
Function init — Main entry point for launching the HPX runtime system.
Function init — Main entry point for launching the HPX runtime system.
Function init — Main entry point for launching the HPX runtime system.
Function init — Main entry point for launching the HPX runtime system.
Function init — Main entry point for launching the HPX runtime system.
Function init — Main entry point for launching the HPX runtime system.
Header <hpx/hpx_start.hpp>
Function start — Main non-blocking entry point for launching the HPX runtime system.
Function start — Main non-blocking entry point for launching the HPX runtime system.
Function start — Main non-blocking entry point for launching the HPX runtime system.
Function start — Main non-blocking entry point for launching the HPX runtime system.
Function start — Main non-blocking entry point for launching the HPX runtime system.
Function start — Main non-blocking entry point for launching the HPX runtime system.
Function start — Main non-blocking entry point for launching the HPX runtime system.
Function start — Main non-blocking entry point for launching the HPX runtime system.
Function start — Main non-blocking entry point for launching the HPX runtime system.
Function start — Main non-blocking entry point for launching the HPX runtime system.
Function start — Main non-blocking entry point for launching the HPX runtime system.
Function start — Main non-blocking entry point for launching the HPX runtime system.
Function start — Main non-blocking entry point for launching the HPX runtime system.
Header <hpx/lcos/broadcast.hpp>
Function template broadcast — Perform a distributed broadcast operation.
Function template broadcast_apply — Perform an asynchronous (fire&forget) distributed broadcast operation.
Function template broadcast_with_index — Perform a distributed broadcast operation.
Function template broadcast_apply_with_index — Perform an asynchronous (fire&forget) distributed broadcast operation.
Header <hpx/lcos/fold.hpp>
Function template fold — Perform a distributed fold operation.
Function template fold_with_index — Perform a distributed folding operation.
Function template inverse_fold — Perform a distributed inverse folding operation.
Function template inverse_fold_with_index — Perform a distributed inverse folding operation.
Header <hpx/lcos/gather.hpp>
Function template gather_here
Function template gather_there
Function template gather_here
Function template gather_there
Header <hpx/lcos/wait_all.hpp>
Function template wait_all
Function template wait_all
Function template wait_all
Function template wait_all_n
Header <hpx/lcos/wait_any.hpp>
Function template wait_any
Function template wait_any
Function template wait_any
Function template wait_any
Function template wait_any_n
Header <hpx/lcos/wait_each.hpp>
Function template wait_each
Function template wait_each
Function template wait_each
Function template wait_each_n
Header <hpx/lcos/wait_some.hpp>
Function template wait_some
Function template wait_some
Function template wait_some
Function template wait_some_n
Header <hpx/lcos/when_all.hpp>
Function template when_all
Function template when_all
Function template when_all
Function template when_all_n
Header <hpx/lcos/when_any.hpp>
Struct template when_any_result
Function template when_any
Function template when_any
Function template when_any
Function template when_any_n
Header <hpx/lcos/when_each.hpp>
Function template when_each
Function template when_each
Function template when_each
Function template when_each_n
Header <hpx/lcos/when_some.hpp>
Struct template when_some_result
Function template when_some
Function template when_some
Function template when_some
Function template when_some
Function template when_some_n
Header <hpx/lcos_fwd.hpp>
Header <hpx/parallel/algorithms/adjacent_difference.hpp>
Function template adjacent_difference
Function template adjacent_difference
Header <hpx/parallel/algorithms/adjacent_find.hpp>
Function template adjacent_find
Function template adjacent_find
Header <hpx/parallel/algorithms/all_any_none.hpp>
Function template none_of
Function template any_of
Function template all_of
Header <hpx/parallel/algorithms/copy.hpp>
Function template copy
Function template copy_n
Function template copy_if
Header <hpx/parallel/container_algorithms/copy.hpp>
Function template copy
Function template copy_if
Header <hpx/parallel/algorithms/count.hpp>
Function template count
Function template count_if
Header <hpx/parallel/algorithms/equal.hpp>
Function template equal
Function template equal
Function template equal
Function template equal
Header <hpx/parallel/algorithms/exclusive_scan.hpp>
Function template exclusive_scan
Function template exclusive_scan
Header <hpx/parallel/algorithms/fill.hpp>
Function template fill
Function template fill_n
Header <hpx/parallel/algorithms/find.hpp>
Function template find
Function template find_if
Function template find_if_not
Function template find_end
Function template find_end
Function template find_first_of
Function template find_first_of
Header <hpx/parallel/algorithms/for_each.hpp>
Function template for_each_n
Function template for_each
Header <hpx/parallel/container_algorithms/for_each.hpp>
Function template for_each
Header <hpx/parallel/algorithms/generate.hpp>
Function template generate
Function template generate_n
Header <hpx/parallel/container_algorithms/generate.hpp>
Function template generate
Header <hpx/parallel/algorithms/includes.hpp>
Function template includes
Function template includes
Header <hpx/parallel/algorithms/inclusive_scan.hpp>
Function template inclusive_scan
Function template inclusive_scan
Function template inclusive_scan
Header <hpx/parallel/algorithms/inner_product.hpp>
Function template inner_product
Function template inner_product
Header <hpx/parallel/algorithms/is_partitioned.hpp>
Function template is_partitioned
Header <hpx/parallel/algorithms/is_sorted.hpp>
Function template is_sorted
Function template is_sorted
Function template is_sorted_until
Function template is_sorted_until
Header <hpx/parallel/algorithms/lexicographical_compare.hpp>
Function template lexicographical_compare
Function template lexicographical_compare
Header <hpx/parallel/algorithms/minmax.hpp>
Function template min_element
Function template max_element
Function template minmax_element
Header <hpx/parallel/container_algorithms/minmax.hpp>
Function template min_element
Function template max_element
Function template minmax_element
Header <hpx/parallel/algorithms/mismatch.hpp>
Function template mismatch
Function template mismatch
Function template mismatch
Function template mismatch
Header <hpx/parallel/algorithms/move.hpp>
Function template move
Header <hpx/parallel/algorithms/reduce.hpp>
Function template reduce
Function template reduce
Function template reduce
Header <hpx/lcos/reduce.hpp>
Function template reduce — Perform a distributed reduction operation.
Function template reduce_with_index — Perform a distributed reduction operation.
Header <hpx/parallel/algorithms/remove_copy.hpp>
Function template remove_copy
Function template remove_copy_if
Header <hpx/parallel/container_algorithms/remove_copy.hpp>
Function template remove_copy
Function template remove_copy_if
Header <hpx/parallel/algorithms/replace.hpp>
Function template replace
Function template replace_if
Function template replace_copy
Function template replace_copy_if
Header <hpx/parallel/container_algorithms/replace.hpp>
Function template replace
Function template replace_if
Function template replace_copy
Function template replace_copy_if
Header <hpx/parallel/algorithms/reverse.hpp>
Function template reverse
Function template reverse_copy
Header <hpx/parallel/container_algorithms/reverse.hpp>
Function template reverse
Function template reverse_copy
Header <hpx/parallel/algorithms/rotate.hpp>
Function template rotate
Function template rotate_copy
Header <hpx/parallel/container_algorithms/rotate.hpp>
Function template rotate
Function template rotate_copy
Header <hpx/parallel/algorithms/search.hpp>
Function template search
Function template search
Function template search_n
Function template search_n
Header <hpx/parallel/algorithms/set_difference.hpp>
Function template set_difference
Function template set_difference
Header <hpx/parallel/algorithms/set_intersection.hpp>
Function template set_intersection
Function template set_intersection
Header <hpx/parallel/algorithms/set_symmetric_difference.hpp>
Function template set_symmetric_difference
Function template set_symmetric_difference
Header <hpx/parallel/algorithms/set_union.hpp>
Function template set_union
Function template set_union
Header <hpx/parallel/container_algorithms/sort.hpp>
Function template sort
Header <hpx/parallel/algorithms/sort_by_key.hpp>
Function template sort_by_key
Header <hpx/parallel/algorithms/swap_ranges.hpp>
Function template swap_ranges
Header <hpx/parallel/algorithms/transform.hpp>
Function template transform
Function template transform
Function template transform
Header <hpx/parallel/container_algorithms/transform.hpp>
Function template transform
Function template transform
Function template transform
Header <hpx/parallel/algorithms/transform_exclusive_scan.hpp>
Function template transform_exclusive_scan
Header <hpx/parallel/algorithms/transform_inclusive_scan.hpp>
Function template transform_inclusive_scan
Function template transform_inclusive_scan
Header <hpx/parallel/algorithms/transform_reduce.hpp>
Function template transform_reduce
Header <hpx/parallel/algorithms/uninitialized_copy.hpp>
Function template uninitialized_copy
Function template uninitialized_copy_n
Header <hpx/parallel/algorithms/uninitialized_fill.hpp>
Function template uninitialized_fill
Function template uninitialized_fill_n
Header <hpx/parallel/execution_policy.hpp>
Struct template rebind_executor
Struct sequential_task_execution_policy
Struct template sequential_task_execution_policy_shim
Struct sequential_execution_policy
Struct template sequential_execution_policy_shim
Struct parallel_task_execution_policy
Struct template parallel_task_execution_policy_shim
Struct parallel_execution_policy
Struct template parallel_execution_policy_shim
Struct parallel_vector_execution_policy
Struct template is_rebound_execution_policy
Struct template is_execution_policy
Struct template is_parallel_execution_policy
Struct template is_sequential_execution_policy
Struct template is_async_execution_policy
Class execution_policy
Global task
Global seq — Default sequential execution policy object.
Global par — Default parallel execution policy object.
Global par_vec — Default vector execution policy object.
Header <hpx/parallel/executors/auto_chunk_size.hpp>
Struct auto_chunk_size
Header <hpx/parallel/executors/dynamic_chunk_size.hpp>
Struct dynamic_chunk_size
Header <hpx/parallel/executors/executor_parameter_traits.hpp>
Struct sequential_executor_parameters
Type definition executor_parameters_type
Function template variable_chunk_size
Function template get_chunk_size
Function template reset_thread_distribution
Function processing_units_count
Header <hpx/parallel/executors/executor_traits.hpp>
Struct sequential_execution_tag
Struct parallel_execution_tag
Struct vector_execution_tag
Struct template executor_traits
Header <hpx/parallel/executors/guided_chunk_size.hpp>
Struct guided_chunk_size
Header <hpx/parallel/executors/parallel_executor.hpp>
Struct parallel_executor
Header <hpx/parallel/executors/sequential_executor.hpp>
Struct sequential_executor
Header <hpx/parallel/executors/service_executors.hpp>
Struct service_executor
Header <hpx/parallel/executors/static_chunk_size.hpp>
Struct static_chunk_size
Header <hpx/parallel/executors/thread_pool_executors.hpp>
Type definition local_priority_queue_executor
Header <hpx/parallel/executors/timed_executor_traits.hpp>
Struct template timed_executor_traits
Header <hpx/parallel/task_block.hpp>
Class task_canceled_exception
Class template task_block
Function template define_task_block
Function template define_task_block
Function template define_task_block_restore_thread
Function template define_task_block_restore_thread
Header <hpx/performance_counters/manage_counter_type.hpp>
Function install_counter_type — Install a new generic performance counter type in a way that will uninstall it automatically during shutdown.
Function install_counter_type — Install a new performance counter type in a way that will uninstall it automatically during shutdown.
Function install_counter_type — Install a new performance counter type in a way that will uninstall it automatically during shutdown.
Function install_counter_type — Install a new generic performance counter type in a way that will uninstall it automatically during shutdown.
Header <hpx/runtime/actions/basic_action.hpp>
Macro HPX_REGISTER_ACTION_DECLARATION — Declare the necessary component action boilerplate code. The macro HPX_REGISTER_ACTION_DECLARATION can be used to declare all the boilerplate code which is required for proper functioning of component actions in the context of HPX. The parameter action is the type of the action to declare the boilerplate for. This macro can be invoked with an optional second parameter, which specifies a unique name of the action to be used for serialization purposes. The second parameter has to be specified if the first parameter is not usable as a plain (non-qualified) C++ identifier, i.e. if the first parameter contains special characters which cannot be part of a C++ identifier, such as '<', '>', or ':'.
Macro HPX_REGISTER_ACTION — Define the necessary component action boilerplate code.
Macro HPX_REGISTER_ACTION_ID — Define the necessary component action boilerplate code and assign a predefined unique id to the action.
Header <hpx/runtime/actions/component_action.hpp>
Macro HPX_DEFINE_COMPONENT_ACTION — Registers a member function of a component as an action type with HPX.
Header <hpx/runtime/actions/plain_action.hpp>
Macro HPX_DEFINE_PLAIN_ACTION — Defines a plain action type.
Macro HPX_PLAIN_ACTION — Defines a plain action type based on the given function func and registers it with HPX.
Macro HPX_PLAIN_ACTION_ID — Defines a plain action type based on the given function func, registers it with HPX, and assigns a predefined unique id to the action.
Header <hpx/runtime/agas_fwd.hpp>
Header <hpx/runtime/applier_fwd.hpp>
Function get_applier
Header <hpx/runtime/basename_registration.hpp>
Function find_all_from_basename
Function find_from_basename
Function find_from_basename — Return registered id from the given base name and sequence number.
Function register_with_basename — Register the given id using the given base name.
Function template register_with_basename
Function template register_with_basename
Function unregister_with_basename — Unregister the given id using the given base name.
Header <hpx/runtime/components/binpacking_distribution_policy.hpp>
Struct binpacking_distribution_policy
Global default_binpacking_counter_name
Global binpacked
Header <hpx/runtime/components/colocating_distribution_policy.hpp>
Struct colocating_distribution_policy
Global colocated
Header <hpx/runtime/components/component_factory.hpp>
Macro HPX_REGISTER_COMPONENT — Define a component factory for a component type.
Header <hpx/runtime/components/copy_component.hpp>
Function template copy — Copy given component to the specified target locality.
Function template copy — Copy given component to the specified target locality.
Function template copy — Copy given component to the specified target locality.
Header <hpx/runtime/components/default_distribution_policy.hpp>
Struct default_distribution_policy
Global default_layout
Header <hpx/runtime/components/migrate_component.hpp>
Function template migrate
Function template migrate
Function template migrate
Function template migrate
Header <hpx/runtime/components/new.hpp>
Function template new_ — Create one or more new instances of the given Component type on the specified locality.
Function template new_ — Create multiple new instances of the given Component type on the specified locality.
Function template new_ — Create one or more new instances of the given Component type based on the given distribution policy.
Function template new_ — Create multiple new instances of the given Component type on the localities as defined by the given distribution policy.
Header <hpx/runtime/components_fwd.hpp>
Header <hpx/runtime/find_here.hpp>
Function find_here — Return the global id representing this locality.
Header <hpx/runtime/get_locality_id.hpp>
Function get_locality_id — Return the number of the locality this function is being called from.
Header <hpx/runtime/get_locality_name.hpp>
Function get_locality_name — Return the name of the locality this function is called on.
Function get_locality_name — Return the name of the referenced locality.
Header <hpx/runtime/get_os_thread_count.hpp>
Function get_os_thread_count — Return the number of worker OS-threads used by the given executor to execute HPX threads.
Header <hpx/runtime/get_ptr.hpp>
Function template get_ptr — Returns a future referring to the pointer to the underlying memory of a component.
Function template get_ptr_sync — Returns the pointer to the underlying memory of a component.
Header <hpx/runtime/get_worker_thread_num.hpp>
Function get_worker_thread_num — Return the number of the current OS-thread running in the runtime instance the current HPX-thread is executed with.
Header <hpx/runtime/launch_policy.hpp>
Type launch
Header <hpx/runtime/naming/unmanaged.hpp>
Function unmanaged
Header <hpx/runtime/naming_fwd.hpp>
Global invalid_locality_id
Header <hpx/runtime/parcelset_fwd.hpp>
Header <hpx/runtime/runtime_mode.hpp>
Type runtime_mode
Function get_runtime_mode_name
Header <hpx/runtime/set_parcel_write_handler.hpp>
Type definition parcel_write_handler_type
Function set_parcel_write_handler
Header <hpx/runtime/threads/thread_data_fwd.hpp>
Function get_self
Function get_self_ptr
Function get_ctx_ptr
Function get_self_ptr_checked
Function get_self_id
Function get_parent_id
Function get_parent_phase
Function get_parent_locality_id
Function get_self_component_id
Function get_thread_manager
Function get_thread_count
Header <hpx/runtime/threads/thread_enums.hpp>
Type thread_state_enum
Type thread_priority
Type thread_state_ex_enum
Header <hpx/runtime/threads_fwd.hpp>
Header <hpx/runtime/trigger_lco.hpp>
Function trigger_lco_event — Trigger the LCO referenced by the given id.
Function trigger_lco_event — Trigger the LCO referenced by the given id.
Function trigger_lco_event — Trigger the LCO referenced by the given id.
Function trigger_lco_event — Trigger the LCO referenced by the given id.
Function template set_lco_value — Set the result value for the LCO referenced by the given id.
Function template set_lco_value — Set the result value for the LCO referenced by the given id.
Function template set_lco_value — Set the result value for the LCO referenced by the given id.
Function template set_lco_value — Set the result value for the LCO referenced by the given id.
Function set_lco_error — Set the error state for the LCO referenced by the given id.
Function set_lco_error — Set the error state for the LCO referenced by the given id.
Function set_lco_error — Set the error state for the LCO referenced by the given id.
Function set_lco_error — Set the error state for the LCO referenced by the given id.
Function set_lco_error — Set the error state for the LCO referenced by the given id.
Function set_lco_error — Set the error state for the LCO referenced by the given id.
Function set_lco_error — Set the error state for the LCO referenced by the given id.
Function set_lco_error — Set the error state for the LCO referenced by the given id.
Header <hpx/runtime_fwd.hpp>
Function get_runtime
Function get_runtime_instance_number
Terminology
People

The STE||AR Group (Systems Technology, Emergent Parallelism, and Algorithm Research) is an international research group with the goal of promoting the development of scalable parallel applications by providing a community for ideas, a framework for collaboration, and a platform for communicating these concepts to the broader community. The main contributors to HPX in the STE||AR Group are researchers from Louisiana State University (LSU)'s Center for Computation and Technology (CCT) and the Friedrich-Alexander University Erlangen-Nuremberg (FAU)'s Department of Computer Science 3 - Computer Architecture. For a full list of people working in this group and participating in writing this documentation see People.

This documentation is automatically generated for HPX V0.9.12 (from Git commit: 316b9468de9f5630b70d2a0ea3e63b09104a742a) by the Boost QuickBook and AutoIndex documentation tools. QuickBook and AutoIndex can be found in the collection of Boost Tools.

History

The development of High Performance ParalleX (HPX) started back in 2007. At that point, Hartmut Kaiser became interested in the work done by the ParalleX group at the Center for Computation and Technology (CCT), a multi-disciplinary research institute at Louisiana State University (LSU). The ParalleX group was developing a new and experimental execution model for future high performance computing architectures called ParalleX. The first implementations were crude at best, and we had to dismiss those designs entirely. However, over time we learned quite a bit about how to design a parallel distributed runtime system which implements the ideas of ParalleX.

Our goal is to create a high quality, freely available, open source implementation of the ParalleX model for conventional systems, such as classic Linux based Beowulf clusters or multi-socket highly parallel SMP nodes. At the same time, we want to have a very modular and well designed runtime system architecture which would allow us to port our implementation onto new computer system architectures. We want to use real world applications to drive the development of the runtime system, carving out required functionality and converging on a stable API which will provide a smooth migration path for developers. The API exposed by HPX is modelled after the interfaces defined by the C++11 ISO standard and adheres to the programming guidelines used by the Boost collection of C++ libraries.

From the very beginning, this endeavour has been a group effort. In addition to a handful of interested researchers, there have always been graduate and undergraduate students participating in the discussions, design, and implementation of HPX. In 2011 we decided to formalize our collective research efforts by creating the STE||AR group. STE||AR stands for Systems Technology, Emergent Parallelism, and Algorithm Research, which describes the three main focal points we center our work around.

To learn more about STE||AR and ParalleX, see People and Introduction.

How to use this manual

Some icons are used to mark certain topics indicative of their relevance. These icons precede some text to indicate:

Table 1. Icons

  Note      - Generally useful information (an aside that doesn't fit in the flow of the text)
  Tip       - Suggestion on how to do something (especially something that is not obvious)
  Important - Important note on something to take particular notice of
  Caution   - Take special care with this - it may not be what you expect and may cause bad results


The following table describes the syntax that will be used to refer to functions and classes throughout the manual:

Table 2. Syntax for Code References

  foo()   - The function foo
  foo<>() - The template function foo (used only for template functions that require explicit parameters)
  foo     - The class foo
  foo<>   - The class template foo

Support

Please feel free to direct questions to HPX's mailing list, hpx-users@stellar.cct.lsu.edu, or to join our IRC channel #ste||ar on Freenode.

General Changes

This release of HPX is mostly a bug-fix release. Besides cleaning up many minor issues and some API inconsistencies, we have also added some new functionality. The most notable addition is the now complete implementation of object migration (i.e. the ability to transparently move HPX components to a different compute node).

  • We have fixed a couple of issues in AGAS and the parcel layer which have caused hangs, segmentation faults at exit, and a slowdown of applications over time. Fixing those has significantly increased the overall stability and performance of distributed runs.
  • We have started to add parallel algorithm overloads based on the C++ Extensions for Ranges (N4560) proposal. This also includes the addition of projections to the existing algorithms. Please see IS#1668 for a list of algorithms which have been adapted to N4560.
  • Added hpx::make_future<R>(future<T> &&), which allows converting a future of any type T into a future of any other type R, either based on the default conversion rules between the embedded types or using a given explicit conversion function.
  • We finally finished the implementation of transparent migration of components to another locality. It is now possible to trigger a migration operation without 'stopping the world' for the object to migrate. HPX will make sure that no work is being performed on an object before it is migrated and that all subsequently scheduled work for the migrated object will be transparently forwarded to the new locality. Please note that the global id of the migrated object does not change, thus the application will not have to be changed in any way to support this new functionality. Please note that this feature is currently considered experimental. See IS#559 and PR#1966 for more details.
  • The hpx::dataflow facility is now usable with actions. Similarly to hpx::async, actions can be specified as an explicit template argument (hpx::dataflow<Action>(target, ...)) or as the first argument (hpx::dataflow(Action(), target, ...)). We have also enabled the use of distribution policies as the target for dataflow invocations. Please see IS#1265 and PR#1912 for more information.
  • Added overloads of gather_here and gather_there accepting the plain values of the data to gather (in addition to the existing overloads expecting futures).
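The conversion semantics of the new hpx::make_future<R> facility can be sketched with plain standard-library futures. This is only an illustration of the two overload families described above, not HPX's implementation; make_converted_future is a hypothetical stand-in name:

```cpp
#include <future>
#include <utility>

// Convert a future<T> into a future<R>. With one argument, the embedded
// value is converted via the default conversion rules (static_cast); with
// an explicit conversion function, that function maps the T result to R.
template <typename R, typename T>
std::future<R> make_converted_future(std::future<T> f)
{
    return std::async(std::launch::deferred,
        [](std::future<T> g) -> R { return static_cast<R>(g.get()); },
        std::move(f));
}

template <typename R, typename T, typename Conv>
std::future<R> make_converted_future(std::future<T> f, Conv conv)
{
    return std::async(std::launch::deferred,
        [conv](std::future<T> g) -> R { return conv(g.get()); },
        std::move(f));
}
```

For example, make_converted_future<double>(f) turns a std::future<int> into a std::future<double>; the analogous HPX call is hpx::make_future<double>(std::move(f)).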
Breaking Changes
  • In order to move the dataflow facility to namespace hpx we added a definition of hpx::dataflow which might create ambiguities in existing codes. The previous definition of this facility (hpx::lcos::local::dataflow) has been deprecated and is available only if the constant HPX_WITH_LOCAL_DATAFLOW_COMPATIBILITY is defined at configuration time. Please explicitly qualify all uses of the dataflow facility if you enable this compatibility setting and encounter ambiguities.
  • The adaptation of the C++ Extensions for Ranges (N4560) proposal imposes some breaking changes related to the return types of some of the parallel algorithms. Please see IS#1668 for a list of algorithms which have already been adapted.
  • The facility hpx::lcos::make_future_void() has been replaced by hpx::make_future<void>().
Bug Fixes (Closed Tickets)

Here is a list of the important tickets we closed for this release.

  • PR#2004 - Fixing 2000
  • PR#2003 - Adding generation parameter to gather to enable using it more than once
  • PR#2002 - Turn on position independent code to solve link problem with hpx_init
  • IS#2001 - Gathering more than once segfaults
  • IS#2000 - Undefined reference to hpx::assertion_failed
  • PR#1998 - Detect unknown command line options
  • PR#1997 - Extending thread description
  • PR#1996 - Adding natvis files to solution (MSVC only)
  • IS#1995 - Command line handling does not produce error
  • PR#1994 - Possible missing include in test_utils.hpp
  • PR#1993 - Add missing LANGUAGES tag to a hpx_add_compile_flag_if_available() call in CMakeLists.txt
  • PR#1992 - Fixing shared_executor_test
  • PR#1991 - Making sure the winsock library is properly initialized
  • PR#1990 - Fixing bind_test placeholder ambiguity coming from boost-1.60
  • PR#1989 - Performance tuning
  • PR#1987 - Make configurable size of internal storage in util::function
  • PR#1986 - AGAS Refactoring+1753 Cache mods
  • PR#1985 - Adding missing task_block::run() overload taking an executor
  • PR#1984 - Adding an optimized LRU Cache implementation
  • PR#1983 - Avoid invoking migration table look up for all objects
  • PR#1981 - Replacing uintptr_t (which is not defined everywhere) with std::size_t
  • PR#1980 - Optimizing LCO continuations
  • PR#1979 - Fixing Cori
  • PR#1978 - Fix test check that got broken in hasty fix to memory overflow
  • PR#1977 - Refactor action traits
  • PR#1976 - Fixes typo in README.rst
  • PR#1975 - Reduce size of benchmark timing arrays to fix test failures
  • PR#1974 - Add action to update data owned by the partitioned_vector component
  • PR#1972 - Adding partitioned_vector SPMD example
  • PR#1971 - Fixing 1965
  • PR#1970 - Papi fixes
  • PR#1969 - Fixing continuation recursions to not depend on fixed amount of recursions
  • PR#1968 - More segmented algorithms
  • IS#1967 - Simplify component implementations
  • PR#1966 - Migrate components
  • IS#1964 - fatal error: 'boost/lockfree/detail/branch_hints.hpp' file not found
  • IS#1962 - parallel:copy_if has race condition when used on in place arrays
  • PR#1963 - Fixing Static Parcelport initialization
  • PR#1961 - Fix function target
  • IS#1960 - Papi counters don't reset
  • PR#1959 - Fixing 1958
  • IS#1958 - inclusive_scan gives incorrect results with non-commutative operator
  • PR#1957 - Fixing #1950
  • PR#1956 - Sort by key example
  • PR#1955 - Adding regression test for #1946: Hang in wait_all() in distributed run
  • IS#1954 - HPX releases should not use -Werror
  • PR#1953 - Adding performance analysis for AGAS cache
  • PR#1952 - Adapting test for explicit variadics to fail for gcc 4.6
  • PR#1951 - Fixing memory leak
  • IS#1950 - Simplify external builds
  • PR#1949 - Fixing yet another lock that is being held during suspension
  • PR#1948 - Fixed container algorithms for Intel
  • PR#1947 - Adding workaround for tagged_tuple
  • IS#1946 - Hang in wait_all() in distributed run
  • PR#1945 - Fixed container algorithm tests
  • IS#1944 - assertion 'p.destination_locality() == hpx::get_locality()' failed
  • PR#1943 - Fix a couple of compile errors with clang
  • PR#1942 - Making parcel coalescing functional
  • IS#1941 - Re-enable parcel coalescing
  • PR#1940 - Touching up make_future
  • PR#1939 - Fixing problems in over-subscription management in the resource manager
  • PR#1938 - Removing use of unified Boost.Thread header
  • PR#1937 - Cleaning up the use of Boost.Accumulator headers
  • PR#1936 - Making sure interval timer is started for aggregating performance counters
  • PR#1935 - Tagged results
  • PR#1934 - Fix remote async with deferred launch policy
  • IS#1933 - Floating point exception in statistics_counter<boost::accumulators::tag::mean>::get_counter_value
  • PR#1932 - Removing superfluous includes of boost/lockfree/detail/branch_hints.hpp
  • PR#1931 - fix compilation with clang 3.8.0
  • IS#1930 - Missing online documentation for HPX 0.9.11
  • PR#1929 - LWG2485: get() should be overloaded for const tuple&&
  • PR#1928 - Revert "Using ninja for circle-ci builds"
  • PR#1927 - Using ninja for circle-ci builds
  • PR#1926 - Fixing serialization of std::array
  • IS#1925 - Issues with static HPX libraries
  • IS#1924 - Peformance degrading over time
  • IS#1923 - serialization of std::array appears broken in latest commit
  • PR#1922 - Container algorithms
  • PR#1921 - Tons of smaller quality improvements
  • IS#1920 - Seg fault in hpx::serialization::output_archive::add_gid when running octotiger
  • IS#1919 - Intel 15 compiler bug preventing HPX build
  • PR#1918 - Address sanitizer fixes
  • PR#1917 - Fixing compilation problems of parallel::sort with Intel compilers
  • PR#1916 - Making sure code compiles if HPX_WITH_HWLOC=Off
  • IS#1915 - max_cores undefined if HPX_WITH_HWLOC=Off
  • PR#1913 - Add utility member functions for partitioned_vector
  • PR#1912 - Adding support for invoking actions to dataflow
  • PR#1911 - Adding first batch of container algorithms
  • PR#1910 - Keep cmake_module_path
  • PR#1909 - Fix mpirun with pbs
  • PR#1908 - Changing parallel::sort to return the last iterator as proposed by N4560
  • PR#1907 - Adding a minimum version for Open MPI
  • PR#1906 - Updates to the Release Procedure
  • PR#1905 - Fixing #1903
  • PR#1904 - Making sure std containers are cleared before serialization loads data
  • IS#1903 - When running octotiger, I get: assertion '(*new_gids_)[gid].size() == 1' failed: HPX(assertion_failure)
  • IS#1902 - Immediate crash when running hpx/octotiger with _GLIBCXX_DEBUG defined.
  • PR#1901 - Making non-serializable classes non-serializable
  • IS#1900 - Two possible issues with std::list serialization
  • PR#1899 - Fixing a problem with credit splitting as revealed by #1898
  • IS#1898 - Accessing component from locality where it was not created segfaults
  • PR#1897 - Changing parallel::sort to return the last iterator as proposed by N4560
  • IS#1896 - version 1.0?
  • PR#1894 - Add support for compilers that have thread_local
  • PR#1893 - Fixing 1890
  • PR#1892 - Adds typed future_type for executor_traits
  • PR#1891 - Fix wording in certain parallel algorithm docs
  • IS#1890 - Invoking papi counters give segfault
  • PR#1889 - Fixing problems as reported by clang-check
  • PR#1888 - WIP parallel is_heap
  • PR#1887 - Fixed resetting performance counters related to idle-rate, etc
  • IS#1886 - Run hpx with qsub does not work
  • PR#1885 - Warning cleaning pass
  • PR#1884 - Add missing parallel algorithm header
  • PR#1883 - Add feature test for thread_local on Clang for TLS
  • PR#1882 - Fix some redundant qualifiers
  • IS#1881 - Unable to compile Octotiger using HPX and Intel MPI on SuperMIC
  • IS#1880 - clang with libc++ on Linux needs TLS case
  • PR#1879 - Doc fixes for #1868
  • PR#1878 - Simplify functions
  • PR#1877 - Removing most usage of Boost.Config
  • PR#1876 - Add missing parallel algorithms to algorithm.hpp
  • PR#1875 - Simplify callables
  • PR#1874 - Address long standing FIXME on using std::unique_ptr with incomplete types
  • PR#1873 - Fixing 1871
  • PR#1872 - Making sure PBS environment uses specified node list even if no PBS_NODEFILE env is available
  • IS#1871 - Fortran checks should be optional
  • PR#1870 - Touch local::mutex
  • PR#1869 - Documentation refactoring based off #1868
  • PR#1867 - Embrace static_assert
  • PR#1866 - Fix #1803 with documentation refactoring
  • PR#1865 - Setting OUTPUT_NAME as target properties
  • PR#1863 - Use SYSTEM for boost includes
  • PR#1862 - Minor cleanups
  • PR#1861 - Minor Corrections for Release
  • PR#1860 - Fixing hpx gdb script
  • IS#1859 - reset_active_counters resets times and thread counts before some of the counters are evaluated
  • PR#1858 - Release V0.9.11
  • PR#1857 - removing diskperf example from 9.11 release
  • PR#1856 - fix return in packaged_task_base::reset()
  • IS#1842 - Install error: file INSTALL cannot find libhpx_parcel_coalescing.so.0.9.11
  • PR#1824 - Changing version on master to V0.9.12
  • PR#1818 - Fixing #1748
  • IS#1815 - seg fault in AGAS
  • IS#1803 - wait_all documentation
  • IS#1796 - Outdated documentation to be revised
  • IS#1753 - HPX performance degrades with time since execution begins
  • IS#1748 - All public HPX headers need to be self contained
  • IS#1523 - Remote async with deferred launch policy never executes
  • IS#1472 - Serialization issues
  • IS#1457 - Implement N4392: C++ Latches and Barriers
  • IS#1265 - Enable dataflow() to be usable with actions
  • IS#1236 - NUMA aware allocators
  • IS#559 - Add hpx::migrate facility
  • IS#279 - Refactor addressing_service into a base class and two derived classes

Our main focus for this release was the design and development of a coherent set of higher-level APIs exposing various types of parallelism to the application programmer. We introduced the concept of an executor, which can be used to customize the where and when of task execution when parallelizing code. We extended all APIs related to managing parallel tasks to support executors, which gives the user the choice of either using one of the predefined executor types or providing their own, possibly application-specific, executor. We paid very close attention to aligning all of these changes with the existing C++ Standards documents or with the ongoing proposals for standardization.

This release is the first after our change to a new development policy. We switched all development to be performed strictly on branches; all direct commits to our main branch (master) are prohibited, and any change has to go through a peer review before it is merged to master. As a result, the overall stability of our code base has significantly increased and the development process itself has been simplified. This change manifests itself in the large number of pull-requests which have been merged (please see below for a full list of closed issues and pull-requests). All in all, we closed almost 100 issues and merged over 290 pull-requests for this release. There have been over 1600 commits to the master branch since the last release.

General Changes
  • We are moving in the direction of unifying managed and simple components. As such, the classes hpx::components::component and hpx::components::component_base have been added; they currently just forward to the existing simple component facilities. The examples have been converted to use only those two classes.
  • Added integration with the CircleCI hosted continuous integration service. This gives us constant and immediate feedback on the health of our master branch.
  • The compiler configuration subsystem in the build system has been reimplemented. Instead of using Boost.Config we now use our own lightweight set of cmake scripts to determine the available language and library features supported by the used compiler.
  • The API for creating instances of components has been consolidated. All component instances should be created using hpx::new_<>() only. It allows instantiating both single component instances and multiple component instances at once. The placement of the created components can be controlled by special distribution policies. Please see the corresponding documentation outlining the use of hpx::new_<>().
  • Introduced four new distribution policies which can be used with many API functions that traditionally expected to be used with a locality id.
  • The new distribution policies can now also be used with hpx::async. This change also deprecates hpx::async_colocated(id, ...), which is now replaced by a distribution policy: hpx::async(hpx::colocated(id), ...).
  • The hpx::vector and hpx::unordered_map data structures can now be used with the new distribution policies as well.
  • The parallel facility hpx::parallel::task_region has been renamed to hpx::parallel::task_block based on the changes in the corresponding standardization proposal N4411.
  • Added extensions to the parallel facility hpx::parallel::task_block which allow combining a task_block with an execution policy. This implies a minor breaking change, as hpx::parallel::task_block is now a template.
  • Added new LCOs: hpx::lcos::latch and hpx::lcos::local::latch which semantically conform to the proposed std::latch (see N4399).
  • Added performance counters exposing data related to data transferred by input/output (filesystem) operations (thanks to Maciej Brodowicz).
  • Added performance counters that track the number of action invocations (both local and remote).
  • Added new command line options --hpx:print-counter-at and --hpx:reset-counters.
  • The hpx::vector component has been renamed to hpx::partitioned_vector to make it explicit that the underlying memory is not contiguous.
  • Introduced a completely new and uniform higher-level parallelism API which is based on executors. All existing parallelism APIs have been adapted to this. We have added a large number of different executor types, such as a numa-aware executor, a this-thread executor, etc.
  • Added support for the MingW toolchain on Windows (thanks to Eric Lemanissier).
  • HPX now includes support for APEX (Autonomic Performance Environment for eXascale). APEX is an instrumentation and software adaptation library that provides an interface to TAU profiling/tracing as well as runtime adaptation of HPX applications through policy definitions. For more information and documentation, please see https://github.com/khuck/xpress-apex. To enable APEX at configuration time, specify -DHPX_WITH_APEX=On. To also include support for TAU profiling, specify -DHPX_WITH_TAU=On and specify the -DTAU_ROOT, -DTAU_ARCH and -DTAU_OPTIONS cmake parameters.
  • We have implemented many more of the parallel algorithms. Please see IS#1141 for the list of all available parallel algorithms (thanks to Daniel Bourgeois and John Biddiscombe for contributing their work).
Breaking Changes
  • We are moving in the direction of unifying managed and simple components. In order to stop exposing the old facilities, all examples have been converted to use the new classes. The breaking change in this release is that performance counters are now based on hpx::components::component_base instead of hpx::components::managed_component_base.
  • We removed the support for stackless threads. It turned out that there was no performance benefit when using stackless threads. As such, we decided to clean up our codebase. This feature was not documented.
  • The CMake project name has changed from 'hpx' to 'HPX' for consistency and compatibility with naming conventions and other CMake projects. Generated config files go into <prefix>/lib/cmake/HPX and not <prefix>/lib/cmake/hpx.
  • The macro HPX_REGISTER_MINIMAL_COMPONENT_FACTORY has been deprecated. Please use HPX_REGISTER_COMPONENT instead. The old macro will be removed in the next release.
  • The obsolete distributing_factory and binpacking_factory components have been removed. The corresponding functionality is now provided by the hpx::new_<>() API function in conjunction with the hpx::default_layout and hpx::binpacking distribution policies (hpx::default_distribution_policy and hpx::binpacking_distribution_policy).
  • The API function hpx::new_colocated has been deprecated. Please use the consolidated API hpx::new_ in conjunction with the new hpx::colocated distribution policy (hpx::colocating_distribution_policy) instead. The old API function will still be available for at least one release of HPX if the configuration variable HPX_WITH_COLOCATED_BACKWARDS_COMPATIBILITY is enabled.
  • The API function hpx::async_colocated has been deprecated. Please use the consolidated API hpx::async in conjunction with the new hpx::colocated distribution policy (hpx::colocating_distribution_policy) instead. The old API function will still be available for at least one release of HPX if the configuration variable HPX_WITH_COLOCATED_BACKWARDS_COMPATIBILITY is enabled.
  • The obsolete remote_object component has been removed.
  • Replaced the use of Boost.Serialization with our own solution. While the new version is mostly compatible with Boost.Serialization, this change requires some minor code modifications in user code. For more information, please see the corresponding announcement on the hpx-users@stellar.cct.lsu.edu mailing list.
  • The names used by cmake to influence various configuration options have been unified. The new naming scheme requires all configuration constants to start with HPX_WITH_..., while the preprocessor constant which is used at build time starts with HPX_HAVE_.... For instance, the former cmake command line -DHPX_MALLOC=... now has to be specified as -DHPX_WITH_MALLOC=... and will cause the preprocessor constant HPX_HAVE_MALLOC to be defined. The actual name of the constant (i.e. MALLOC) has not changed. Please see the corresponding documentation for more details (CMake Variables used to configure HPX).
  • The get_gid() functions exposed by the component base classes hpx::components::server::simple_component_base, hpx::components::server::managed_component_base, and hpx::components::server::fixed_component_base have been replaced by two new functions: get_unmanaged_id() and get_id(). To enable the old function name for backwards compatibility, use the cmake configuration option HPX_WITH_COMPONENT_GET_GID_COMPATIBILITY=On.
  • All functions which were named get_gid() but were returning hpx::id_type have been renamed to get_id(). To enable the old function names for backwards compatibility, use the cmake configuration option HPX_WITH_COMPONENT_GET_GID_COMPATIBILITY=On.
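The renamed cmake configuration scheme described above can be illustrated with a short configuration invocation. This is a hedged sketch: tcmalloc is used here purely as an example allocator value, and /path/to/hpx is a placeholder for the HPX source directory:

```
# Old naming (no longer recognized):
#   cmake -DHPX_MALLOC=tcmalloc /path/to/hpx
#
# New naming: the cmake option starts with HPX_WITH_..., and setting it
# causes the corresponding HPX_HAVE_... preprocessor constant (here
# HPX_HAVE_MALLOC) to be defined at build time:
cmake -DHPX_WITH_MALLOC=tcmalloc /path/to/hpx
```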
Bug Fixes (Closed Tickets)

Here is a list of the important tickets we closed for this release.

  • PR#1855 - Completely removing external/endian
  • PR#1854 - Don't pollute CMAKE_CXX_FLAGS through find_package()
  • PR#1853 - Updating CMake configuration to get correct version of TAU library
  • PR#1852 - Fixing Performance Problems with MPI Parcelport
  • PR#1851 - Fixing hpx_add_link_flag() and hpx_remove_link_flag()
  • PR#1850 - Fixing 1836, adding parallel::sort
  • PR#1849 - Fixing configuration for use of more than 64 cores
  • PR#1848 - Change default APEX version for release
  • PR#1847 - Fix client_base::then on release
  • PR#1846 - Removing broken lcos::local::channel from release
  • PR#1845 - Adding example demonstrating a possible safe-object implementation to release
  • PR#1844 - Removing stubs from accumulator examples
  • PR#1843 - Don't pollute CMAKE_CXX_FLAGS through find_package()
  • PR#1841 - Fixing client_base<>::then
  • PR#1840 - Adding example demonstrating a possible safe-object implementation
  • PR#1838 - Update version rc1
  • PR#1837 - Removing broken lcos::local::channel
  • PR#1835 - Adding exlicit move constructor and assignment operator to hpx::lcos::promise
  • PR#1834 - Making hpx::lcos::promise move-only
  • PR#1833 - Adding fedora docs
  • IS#1832 - hpx::lcos::promise<> must be move-only
  • PR#1831 - Fixing resource manager gcc5.2
  • PR#1830 - Fix intel13
  • PR#1829 - Unbreaking thread test
  • PR#1828 - Fixing #1620
  • PR#1827 - Fixing a memory management issue for the Parquet application
  • IS#1826 - Memory management issue in hpx::lcos::promise
  • PR#1825 - Adding hpx::components::component and hpx::components::component_base
  • PR#1823 - Adding git commit id to circleci build
  • PR#1822 - applying fixes suggested by clang 3.7
  • PR#1821 - Hyperlink fixes
  • PR#1820 - added parallel multi-locality sanity test
  • PR#1819 - Fixing #1667
  • IS#1817 - Hyperlinks generated by inspect tool are wrong
  • PR#1816 - Support hpxrx
  • PR#1814 - Fix async to dispatch to the correct locality in all cases
  • IS#1813 - async(launch::..., action(), ...) always invokes locally
  • PR#1812 - fixed syntax error in CMakeLists.txt
  • PR#1811 - Agas optimizations
  • PR#1810 - drop superfluous typedefs
  • PR#1809 - Allow HPX to be used as an optional package in 3rd party code
  • PR#1808 - Fixing #1723
  • PR#1807 - Making sure resolve_localities does not hang during normal operation
  • IS#1806 - Spinlock no longer movable and deletes operator '=', breaks MiniGhost
  • IS#1804 - register_with_basename causes hangs
  • PR#1801 - Enhanced the inspect tool to take user directly to the problem with hyperlinks
  • IS#1800 - Problems compiling application on smic
  • PR#1799 - Fixing cv exceptions
  • PR#1798 - Documentation refactoring & updating
  • PR#1797 - Updating the activeharmony CMake module
  • PR#1795 - Fixing cv
  • PR#1794 - Fix connect with hpx::runtime_mode_connect
  • PR#1793 - fix a wrong use of HPX_MAX_CPU_COUNT instead of HPX_HAVE_MAX_CPU_COUNT
  • PR#1792 - Allow for default constructed parcel instances to be moved
  • PR#1791 - Fix connect with hpx::runtime_mode_connect
  • IS#1790 - assertion 'action_.get()' failed: HPX(assertion_failure) when running Octotiger with pull request 1786
  • PR#1789 - Fixing discover_counter_types API function
  • IS#1788 - connect with hpx::runtime_mode_connect
  • IS#1787 - discover_counter_types not working
  • PR#1786 - Changing addressing_service to use std::unordered_map instead of std::map
  • PR#1785 - Fix is_iterator for container algorithms
  • PR#1784 - Adding new command line options:
  • PR#1783 - Minor changes for APEX support
  • PR#1782 - Drop legacy forwarding action traits
  • PR#1781 - Attempt to resolve the race between cv::wait_xxx and cv::notify_all
  • PR#1780 - Removing serialize_sequence
  • PR#1779 - Fixed #1501: hwloc configuration options are wrong for MIC
  • PR#1778 - Removing ability to enable/disable parcel handling
  • PR#1777 - Completely removing stackless threads
  • PR#1776 - Cleaning up util/plugin
  • PR#1775 - Agas fixes
  • PR#1774 - Action invocation count
  • PR#1773 - replaced MSVC variable with WIN32
  • PR#1772 - Fixing Problems in MPI parcelport and future serialization.
  • PR#1771 - Fixing intel 13 compiler errors related to variadic template template parameters for lcos::when_ tests
  • PR#1770 - Forwarding decay to std::
  • PR#1769 - Add more characters with special regex meaning to the existing patch
  • PR#1768 - Adding test for receive_buffer
  • PR#1767 - Making sure that uptime counter throws exception on any attempt to be reset
  • PR#1766 - Cleaning up code related to throttling scheduler
  • PR#1765 - Restricting thread_data to creating only with intrusive_pointers
  • PR#1764 - Fixing 1763
  • IS#1763 - UB in thread_data::operator delete
  • PR#1762 - Making sure all serialization registries/factories are unique
  • PR#1761 - Fixed #1751: hpx::future::wait_for fails a simple test
  • PR#1758 - Fixing #1757
  • IS#1757 - pinning not correct using --hpx:bind
  • IS#1756 - compilation error with MinGW
  • PR#1755 - Making output serialization const-correct
  • IS#1753 - HPX performance degrades with time since execution begins
  • IS#1752 - Error in AGAS
  • IS#1751 - hpx::future::wait_for fails a simple test
  • PR#1750 - Removing hpx_fwd.hpp includes
  • PR#1749 - Simplify result_of and friends
  • PR#1747 - Removed superfluous code from message_buffer.hpp
  • PR#1746 - Tuple dependencies
  • IS#1745 - Broken when_some which takes iterators
  • PR#1744 - Refining archive interface
  • PR#1743 - Fixing when_all when only a single future is passed
  • PR#1742 - Config includes
  • PR#1741 - Os executors
  • IS#1740 - hpx::promise has some problems
  • PR#1739 - Parallel composition with generic containers
  • IS#1738 - After building program and successfully linking to a version of hpx DHPX_DIR seems to be ignored
  • IS#1737 - Uptime problems
  • PR#1736 - added convenience c-tor and begin()/end() to serialize_buffer
  • PR#1735 - Config includes
  • PR#1734 - Fixed #1688: Add timer counters for tfunc_total and exec_total
  • IS#1733 - Add unit test for hpx/lcos/local/receive_buffer.hpp
  • PR#1732 - Renaming get_os_thread_count
  • PR#1731 - Basename registration
  • IS#1730 - Use after move of thread_init_data
  • PR#1729 - Rewriting channel based on new gate component
  • PR#1728 - Fixing #1722
  • PR#1727 - Fixing compile problems with apply_colocated
  • PR#1726 - Apex integration
  • PR#1725 - fixed test timeouts
  • PR#1724 - Renaming vector
  • IS#1723 - Drop support for intel compilers and gcc 4.4. based standard libs
  • IS#1722 - Add support for detecting non-ready futures before serialization
  • PR#1721 - Unifying parallel executors, initializing from launch policy
  • PR#1720 - dropped superfluous typedef
  • IS#1718 - Windows 10 x64, VS 2015 - Unknown CMake command "add_hpx_pseudo_target".
  • PR#1717 - Timed executor traits for thread-executors
  • PR#1716 - serialization of arrays didn't work with non-pod types. fixed
  • PR#1715 - List serialization
  • PR#1714 - changing misspellings
  • PR#1713 - Fixed distribution policy executors
  • PR#1712 - Moving library detection to be executed after feature tests
  • PR#1711 - Simplify parcel
  • PR#1710 - Compile only tests
  • PR#1709 - Implemented timed executors
  • PR#1708 - Implement parallel::executor_traits for thread-executors
  • PR#1707 - Various fixes to threads::executors to make custom schedulers work
  • PR#1706 - Command line option --hpx:cores does not work as expected
  • IS#1705 - command line option --hpx:cores does not work as expected
  • PR#1704 - vector deserialization is speeded up a little
  • PR#1703 - Fixing shared_mutes
  • IS#1702 - Shared_mutex does not compile with no_mutex cond_var
  • PR#1701 - Add distribution_policy_executor
  • PR#1700 - Executor parameters
  • PR#1699 - Readers writer lock
  • PR#1698 - Remove leftovers
  • PR#1697 - Fixing held locks
  • PR#1696 - Modified Scan Partitioner for Algorithms
  • PR#1695 - This thread executors
  • PR#1694 - Fixed #1688: Add timer counters for tfunc_total and exec_total
  • PR#1693 - Fix #1691: is_executor template specification fails for inherited executors
  • PR#1692 - Fixed #1662: Possible exception source in coalescing_message_handler
  • IS#1691 - is_executor template specification fails for inherited executors
  • PR#1690 - added macro for non-intrusive serialization of classes without a default c-tor
  • PR#1689 - Replace value_or_error with custom storage, unify future_data state
  • IS#1688 - Add timer counters for tfunc_total and exec_total
  • PR#1687 - Fixed interval timer
  • PR#1686 - Fixing cmake warnings about not existing pseudo target dependencies
  • PR#1685 - Converting partitioners to use bulk async execute
  • PR#1683 - Adds a tool for inspect that checks for character limits
  • PR#1682 - Change project name to (uppercase) HPX
  • PR#1681 - Counter shortnames
  • PR#1680 - Extended Non-intrusive Serialization to Ease Usage for Library Developers
  • PR#1679 - Working on 1544: More executor changes
  • PR#1678 - Transpose fixes
  • PR#1677 - Improve Boost compatibility check
  • PR#1676 - 1d stencil fix
  • IS#1675 - hpx project name is not HPX
  • PR#1674 - Fixing the MPI parcelport
  • PR#1673 - added move semantics to map/vector deserialization
  • PR#1672 - Vs2015 await
  • PR#1671 - Adapt transform for #1668
  • PR#1670 - Started to work on #1668
  • PR#1669 - Add this_thread_executors
  • IS#1667 - Apple build instructions in docs are out of date
  • PR#1666 - Apex integration
  • PR#1665 - Fixes an error with the whitespace check that showed the incorrect location of the error
  • IS#1664 - Inspect tool found incorrect endline whitespace
  • PR#1663 - Improve use of locks
  • IS#1662 - Possible exception source in coalescing_message_handler
  • PR#1661 - Added support for 128bit number serialization
  • PR#1660 - Serialization 128bits
  • PR#1659 - Implemented inner_product and adjacent_diff algos
  • PR#1658 - Add serialization for std::set (as there is for std::vector and std::map)
  • PR#1657 - Use of shared_ptr in io_service_pool changed to unique_ptr
  • IS#1656 - 1d_stencil codes all have wrong factor
  • PR#1654 - When using runtime_mode_connect, find the correct localhost public ip address
  • PR#1653 - Fixing 1617
  • PR#1652 - Remove traits::action_may_require_id_splitting
  • PR#1651 - Fixed performance counters related to AGAS cache timings
  • PR#1650 - Remove leftovers of traits::type_size
  • PR#1649 - Shorten target names on Windows to shorten used path names
  • PR#1648 - Fixing problems introduced by merging #1623 for older compilers
  • PR#1647 - Simplify running automatic builds on Windows
  • IS#1646 - Cache insert and update performance counters are broken
  • IS#1644 - Remove leftovers of traits::type_size
  • IS#1643 - Remove traits::action_may_require_id_splitting
  • PR#1642 - Adds spell checker to the inspect tool for qbk and doxygen comments
  • PR#1640 - First step towards fixing 688
  • PR#1639 - Re-apply remaining changes from limit_dataflow_recursion branch
  • PR#1638 - This fixes possible deadlock in the test ignore_while_locked_1485
  • PR#1637 - Fixing hpx::wait_all() invoked with two vector<future<T>>
  • PR#1636 - Partially re-apply changes from limit_dataflow_recursion branch
  • PR#1635 - Adding missing test for #1572
  • PR#1634 - Revert "Limit recursion-depth in dataflow to a configurable constant"
  • PR#1633 - Add command line option to ignore batch environment
  • PR#1631 - hpx::lcos::queue exhibits strange behavior
  • PR#1630 - Fixed endline_whitespace_check.cpp to detect lines with only whitespace
  • IS#1629 - Inspect trailing whitespace checker problem
  • PR#1628 - Removed meaningless const qualifiers. Minor icpc fix.
  • PR#1627 - Fixing the queue LCO and add example demonstrating its use
  • PR#1626 - Deprecating get_gid(), add get_id() and get_unmanaged_id()
  • PR#1625 - Allowing to specify whether to send credits along with message
  • IS#1624 - Lifetime issue
  • IS#1623 - hpx::wait_all() invoked with two vector<future<T>> fails
  • PR#1622 - Executor partitioners
  • PR#1621 - Clean up coroutines implementation
  • IS#1620 - Revert #1535
  • PR#1619 - Fix result type calculation for hpx::make_continuation
  • PR#1618 - Fixing RDTSC on Xeon/Phi
  • IS#1617 - hpx cmake not working when run as a subproject
  • IS#1616 - cmake problem resulting in RDTSC not working correctly for Xeon Phi creates very strange results for duration counters
  • IS#1615 - hpx::make_continuation requires input and output to be the same
  • PR#1614 - Fixed remove copy test
  • IS#1613 - Dataflow causes stack overflow
  • PR#1612 - Modified foreach partitioner to use bulk execute
  • PR#1611 - Limit recursion-depth in dataflow to a configurable constant
  • PR#1610 - Increase timeout for CircleCI
  • PR#1609 - Refactoring thread manager, mainly extracting thread pool
  • PR#1608 - Fixed running multiple localities without localities parameter
  • PR#1607 - More algorithm fixes to adjacentfind
  • IS#1606 - Running without localities parameter binds to bogus port range
  • IS#1605 - Too many serializations
  • PR#1604 - Changes the HPX image into a hyperlink
  • PR#1601 - Fixing problems with remove_copy algorithm tests
  • PR#1600 - Actions with ids cleanup
  • PR#1599 - Duplicate binding of global ids should fail
  • PR#1598 - Fixing array access
  • PR#1597 - Improved the reliability of connecting/disconnecting localities
  • IS#1596 - Duplicate id binding should fail
  • PR#1595 - Fixing more cmake config constants
  • PR#1594 - Fixing preprocessor constant used to enable C++11 chrono
  • PR#1593 - Adding operator|() for hpx::launch
  • IS#1592 - Error (typo) in the docs
  • IS#1590 - CMake fails when CMAKE_BINARY_DIR contains '+'.
  • IS#1589 - Disconnecting a locality results in segfault using heartbeat example
  • PR#1588 - Fix doc string for config option HPX_WITH_EXAMPLES
  • PR#1586 - Fixing 1493
  • PR#1585 - Additional Check for Inspect Tool to detect Endline Whitespace
  • IS#1584 - Clean up coroutines implementation
  • PR#1583 - Adding a check for end line whitespace
  • PR#1582 - Attempt to fix assert firing after scheduling loop was exited
  • PR#1581 - Fixed adjacentfind_binary test
  • PR#1580 - Prevent some of the internal cmake lists from growing indefinitely
  • PR#1579 - Removing type_size trait, replacing it with special archive type
  • IS#1578 - Remove demangle_helper
  • PR#1577 - Get ptr problems
  • IS#1576 - Refactor async, dataflow, and future::then
  • PR#1575 - Fixing tests for parallel rotate
  • PR#1574 - Cleaning up schedulers
  • PR#1573 - Fixing thread pool executor
  • PR#1572 - Fixing number of configured localities
  • PR#1571 - Reimplement decay
  • PR#1570 - Refactoring async, apply, and dataflow APIs
  • PR#1569 - Changed range for mach-o library lookup
  • PR#1568 - Mark decltype support as required
  • PR#1567 - Removed const from algorithms
  • IS#1566 - CMAKE Configuration Test Failures for clang 3.5 on debian
  • PR#1565 - Dylib support
  • PR#1564 - Converted partitioners and some algorithms to use executors
  • PR#1563 - Fix several #includes for Boost.Preprocessor
  • PR#1562 - Adding configuration option disabling/enabling all message handlers
  • PR#1561 - Removed all occurrences of boost::move replacing it with std::move
  • IS#1560 - Leftover HPX_REGISTER_ACTION_DECLARATION_2
  • PR#1558 - Revisit async/apply SFINAE conditions
  • PR#1557 - Removing type_size trait, replacing it with special archive type
  • PR#1556 - Executor algorithms
  • PR#1555 - Remove the necessity to specify archive flags on the receiving end
  • PR#1554 - Removing obsolete Boost.Serialization macros
  • PR#1553 - Properly fix HPX_DEFINE_*_ACTION macros
  • PR#1552 - Fixed algorithms relying on copy_if implementation
  • PR#1551 - Pxfs - Modifying FindOrangeFS.cmake based on OrangeFS 2.9.X
  • IS#1550 - Passing plain identifier inside HPX_DEFINE_PLAIN_ACTION_1
  • PR#1549 - Fixing intel14/libstdc++4.4
  • PR#1548 - Moving raw_ptr to detail namespace
  • PR#1547 - Adding support for executors to future.then
  • PR#1546 - Executor traits result types
  • PR#1545 - Integrate executors with dataflow
  • PR#1543 - Fix potential zero-copy for primarynamespace::bulk_service_async et.al.
  • PR#1542 - Merging HPX0.9.10 into pxfs branch
  • PR#1541 - Removed stale cmake tests, unused since the great cmake refactoring
  • PR#1540 - Fix idle-rate on platforms without TSC
  • PR#1539 - Reporting situation if zero-copy-serialization was performed by a parcel generated from a plain apply/async
  • PR#1538 - Changed return type of bulk executors and added test
  • IS#1537 - Incorrect cpuid config tests
  • PR#1536 - Changed return type of bulk executors and added test
  • PR#1535 - Make sure promise::get_gid() can be called more than once
  • PR#1534 - Fixed async_callback with bound callback
  • PR#1533 - Updated the link in the documentation to a publically-accessible URL
  • PR#1532 - Make sure sync primitives are not copyable nor movable
  • PR#1531 - Fix unwrapped issue with future ranges of void type
  • PR#1530 - Serialization complex
  • IS#1528 - Unwrapped issue with future<void>
  • IS#1527 - HPX does not build with Boost 1.58.0
  • PR#1526 - Added support for boost.multi_array serialization
  • PR#1525 - Properly handle deferred futures, fixes #1506
  • PR#1524 - Making sure invalid action argument types generate clear error message
  • IS#1522 - Need serialization support for boost multi array
  • IS#1521 - Remote async and zero-copy serialization optimizations don't play well together
  • PR#1520 - Fixing UB whil registering polymorphic classes for serialization
  • PR#1519 - Making detail::condition_variable safe to use
  • PR#1518 - Fix when_some bug missing indices in its result
  • IS#1517 - Typo may affect CMake build system tests
  • PR#1516 - Fixing Posix context
  • PR#1515 - Fixing Posix context
  • PR#1514 - Correct problems with loading dynamic components
  • PR#1513 - Fixing intel glibc4 4
  • IS#1508 - memory and papi counters do not work
  • IS#1507 - Unrecognized Command Line Option Error causing exit status 0
  • IS#1506 - Properly handle deferred futures
  • PR#1505 - Adding #include - would not compile without this
  • IS#1502 - boost::filesystem::exists throws unexpected exception
  • IS#1501 - hwloc configuration options are wrong for MIC
  • PR#1504 - Making sure boost::filesystem::exists() does not throw
  • PR#1500 - Exit application on --hpx:version/-v and --hpx:info
  • PR#1498 - Extended task block
  • PR#1497 - Unique ptr serialization
  • PR#1496 - Unique ptr serialization (closed)
  • PR#1495 - Switching circleci build type to debug
  • IS#1494 - --hpx:version/-v does not exit after printing version information
  • IS#1493 - add an "hpx_" prefix to libraries and components to avoid name conflicts
  • IS#1492 - Define and ensure limitations for arguments to async/apply
  • PR#1489 - Enable idle rate counter on demand
  • PR#1488 - Made sure detail::condition_variable can be safely destroyed
  • PR#1487 - Introduced default (main) template implementation for ignore_while_checking
  • PR#1486 - Add HPX inspect tool
  • IS#1485 - ignore_while_locked doesn't support all Lockable types
  • PR#1484 - Docker image generation
  • PR#1483 - Move external endian library into HPX
  • PR#1482 - Actions with integer type ids
  • IS#1481 - Sync primitives safe destruction
  • IS#1480 - Move external/boost/endian into hpx/util
  • IS#1478 - Boost inspect violations
  • PR#1479 - Adds serialization for arrays; some futher/minor fixes
  • PR#1477 - Fixing problems with the Intel compiler using a GCC 4.4 std library
  • PR#1476 - Adding hpx::lcos::latch and hpx::lcos::local::latch
  • IS#1475 - Boost inspect violations
  • PR#1473 - Fixing action move tests
  • IS#1471 - Sync primitives should not be movable
  • PR#1470 - Removing hpx::util::polymorphic_factory
  • PR#1468 - Fixed container creation
  • IS#1467 - HPX application fail during finalization
  • IS#1466 - HPX doesn't pick up Torque's nodefile on SuperMIC
  • IS#1464 - HPX option for pre and post bootstrap performance counters
  • PR#1463 - Replacing async_colocated(id, ...) with async(colocated(id), ...)
  • PR#1462 - Consolidated task_region with N4411
  • PR#1461 - Consolidate inconsistent CMake option names
  • IS#1460 - Which malloc is actually used? or at least which one is HPX built with
  • IS#1459 - Make cmake configure step fail explicitly if compiler version is not supported
  • IS#1458 - Update parallel::task_region with N4411
  • PR#1456 - Consolidating new_<>()
  • IS#1455 - Replace async_colocated(id, ...) with async(colocated(id), ...)
  • PR#1454 - Removed harmful std::moves from return statements
  • PR#1453 - Use range-based for-loop instead of Boost.Foreach
  • PR#1452 - C++ feature tests
  • PR#1451 - When serializing, pass archive flags to traits::get_type_size
  • IS#1450 - traits:get_type_size needs archive flags to enable zero_copy optimizations
  • IS#1449 - "couldn't create performance counter" - AGAS
  • IS#1448 - Replace distributing factories with new_<T[]>(...)
  • PR#1447 - Removing obsolete remote_object component
  • PR#1446 - Hpx serialization
  • PR#1445 - Replacing travis with circleci
  • PR#1443 - Always stripping HPX command line arguments before executing start function
  • PR#1442 - Adding --hpx:bind=none to disable thread affinities
  • IS#1439 - Libraries get linked in multiple times, RPATH is not properly set
  • PR#1438 - Removed superfluous typedefs
  • IS#1437 - hpx::init() should strip HPX-related flags from argv
  • IS#1436 - Add strong scaling option to htts
  • PR#1435 - Adding async_cb, async_continue_cb, and async_colocated_cb
  • PR#1434 - Added missing install rule, removed some dead CMake code
  • PR#1433 - Add GitExternal and SubProject cmake scripts from eyescale/cmake repo
  • IS#1432 - Add command line flag to disable thread pinning
  • PR#1431 - Fix #1423
  • IS#1430 - Inconsistent CMake option names
  • IS#1429 - Configure setting HPX_HAVE_PARCELPORT_MPI is ignored
  • PR#1428 - Fixes #1419 (closed)
  • PR#1427 - Adding stencil_iterator and transform_iterator
  • PR#1426 - Fixes #1419
  • PR#1425 - During serialization memory allocation should honour allocator chunk size
  • IS#1424 - chunk allocation during serialization does not use memory pool/allocator chunk size
  • IS#1423 - Remove HPX_STD_UNIQUE_PTR
  • IS#1422 - hpx:threads=all allocates too many os threads
  • PR#1420 - added .travis.yml
  • IS#1419 - Unify enums: hpx::runtime::state and hpx::state
  • PR#1416 - Adding travis builder
  • IS#1414 - Correct directory for dispatch_gcc46.hpp iteration
  • IS#1410 - Set operation algorithms
  • IS#1389 - Parallel algorithms relying on scan partitioner break for small number of elements
  • IS#1325 - Exceptions thrown during parcel handling are not handled correctly
  • IS#1315 - Errors while running performance tests
  • IS#1309 - hpx::vector partitions are not easily extendable by applications
  • PR#1300 - Added serialization/de-serialization to examples.tuplespace
  • IS#1251 - hpx::threads::get_thread_count doesn't consider pending threads
  • IS#1008 - Decrease in application performance overtime; occasional spikes of major slowdown
  • IS#1001 - Zero copy serialization raises assert
  • IS#721 - Make HPX usable for Xeon Phi
  • IS#524 - Extend scheduler to support threads which can't be stolen
General Changes

This is the 12th official release of HPX. It coincides with the 7th anniversary of the first commit to our source code repository. Since then, we have seen over 12300 commits amounting to more than 220000 lines of C++ code.

The major focus of this release was to improve the reliability of large scale runs. We believe we have achieved this goal, as we can now reliably run HPX applications on up to ~24k cores. We have also shown that HPX can be used successfully for symmetric runs (applications using both host cores and Intel Xeon/Phi coprocessors). This is a huge step forward in terms of the usability of HPX. The main focus of this work involved isolating the causes of the segmentation faults at startup and shutdown. Many of these issues were discovered to be the result of suspending threads which hold locks.

A very important improvement introduced with this release is the refactoring of the code representing our parcel-port implementation. Parcel-ports can now be implemented by 3rd parties as independent plugins which are dynamically loaded at runtime (static linking of parcel-ports is also supported). This refactoring also includes a massive improvement of the performance of our existing parcel-ports. We were able to significantly reduce the networking latencies and to improve the available networking bandwidth. Please note that in this release we disabled the ibverbs and ipc parcel ports as those have not been ported to the new plugin system yet (see IS#839).

Another cornerstone of this release is our work towards a complete implementation of N4409 (Working Draft, Technical Specification for C++ Extensions for Parallelism). This document defines a set of parallel algorithms to be added to the C++ standard library. We have now implemented about 75% of all specified parallel algorithms (see Parallel Algorithms for more details). We also implemented some extensions to N4409 which allow all of the algorithms to be invoked asynchronously.

This release adds a first implementation of hpx::vector, a distributed data structure closely aligned with the functionality of std::vector. The difference is that hpx::vector stores its data in partitions, where the partitions can be distributed over different localities. We have started to work on enabling the use of the parallel algorithms with hpx::vector. At this point only a few of the parallel algorithms support distributed data structures (like hpx::vector), for testing purposes (see IS#1338 for documentation of our progress).

Breaking Changes

With this release we have put a lot of effort into making the code base more compatible with C++11. These changes have caused the following issues for backward compatibility:

  • Move to Variadics - All of the API now uses variadic templates. However, this change required modifying the argument sequence of some of the existing API functions (hpx::async_continue, hpx::apply_continue, hpx::when_each, hpx::wait_each, and the synchronous invocation of actions).
  • Changes to Macros - We also removed the macros HPX_STD_FUNCTION and HPX_STD_TUPLE. This shouldn't affect any user code, as we replaced HPX_STD_FUNCTION with hpx::util::function_nonser, which was the default expansion used for this macro. All HPX API functions which expect an hpx::util::function_nonser (or an hpx::util::unique_function_nonser) can now be transparently called with a compatible std::function instead. Similarly, HPX_STD_TUPLE was replaced by its default expansion as well: hpx::util::tuple.
  • Changes to hpx::unique_future - hpx::unique_future, which was deprecated in the previous release in favor of hpx::future, has now been completely removed from HPX. This completes the transition to a fully standards-conforming implementation of hpx::future.
  • Changes to Supported Compilers - Finally, in order to utilize more C++11 semantics, we have officially dropped support for GCC 4.4 and MSVC 2012. Please see our Build Prerequisites page for more details.
Bug Fixes (Closed Tickets)

Here is a list of the important tickets we closed for this release.

  • IS#1402 - Internal shared_future serialization copies
  • IS#1399 - Build takes unusually long time...
  • IS#1398 - Tests using the scan partitioner are broken on at least gcc 4.7 and intel compiler
  • IS#1397 - Completely remove hpx::unique_future
  • IS#1396 - Parallel scan algorithms with different initial values
  • IS#1395 - Race Condition - 1d_stencil_8 - SuperMIC
  • IS#1394 - "suspending thread while at least one lock is being held" - 1d_stencil_8 - SuperMIC
  • IS#1393 - SEGFAULT in 1d_stencil_8 on SuperMIC
  • IS#1392 - Fixing #1168
  • IS#1391 - Parallel Algorithms for scan partitioner for small number of elements
  • IS#1387 - Failure with more than 4 localities
  • IS#1386 - Dispatching unhandled exceptions to outer user code
  • IS#1385 - Adding Copy algorithms, fixing parallel::copy_if
  • IS#1384 - Fixing 1325
  • IS#1383 - Fixed #504: Refactor Dataflow LCO to work with futures, this removes the dataflow component as it is obsolete
  • IS#1382 - is_sorted, is_sorted_until and is_partitioned algorithms
  • IS#1381 - fix for CMake versions prior to 3.1
  • IS#1380 - resolved warning in CMake 3.1 and newer
  • IS#1379 - Compilation error with papi
  • IS#1378 - Towards safer migration
  • IS#1377 - HPXConfig.cmake should include TCMALLOC_LIBRARY and TCMALLOC_INCLUDE_DIR
  • IS#1376 - Warning on uninitialized member
  • IS#1375 - Fixing 1163
  • IS#1374 - Fixing the MSVC 12 release builder
  • IS#1373 - Modifying parallel search algorithm for zero length searches
  • IS#1372 - Modifying parallel search algorithm for zero length searches
  • IS#1371 - Avoid holding a lock during agas::incref while doing a credit split
  • IS#1370 - --hpx:bind throws unexpected error
  • IS#1369 - Getting rid of (void) in loops
  • IS#1368 - Variadic templates support for tuple
  • IS#1367 - One last batch of variadic templates support
  • IS#1366 - Fixing symbolic namespace hang
  • IS#1365 - More held locks
  • IS#1364 - Add counters 1363
  • IS#1363 - Add thread overhead counters
  • IS#1362 - Std config removal
  • IS#1361 - Parcelport plugins
  • IS#1360 - Detuplify transfer_action
  • IS#1359 - Removed obsolete checks
  • IS#1358 - Fixing 1352
  • IS#1357 - Variadic templates support for runtime_support and components
  • IS#1356 - fixed coordinate test for intel13
  • IS#1355 - fixed coordinate.hpp
  • IS#1354 - Lexicographical Compare completed
  • IS#1353 - HPX should set Boost_ADDITIONAL_VERSIONS flags
  • IS#1352 - Error: Cannot find action '' in type registry: HPX(bad_action_code)
  • IS#1351 - Variadic templates support for appliers
  • IS#1350 - Actions simplification
  • IS#1349 - Variadic when and wait functions
  • IS#1348 - Added hpx_init header to test files
  • IS#1347 - Another batch of variadic templates support
  • IS#1346 - Segmented copy
  • IS#1345 - Attempting to fix hangs during shutdown
  • IS#1344 - Std config removal
  • IS#1343 - Removing various distribution policies for hpx::vector
  • IS#1342 - Inclusive scan
  • IS#1341 - Exclusive scan
  • IS#1340 - Adding parallel::count for distributed data structures, adding tests
  • IS#1339 - Update argument order for transform_reduce
  • IS#1337 - Fix dataflow to handle properly ranges of futures
  • IS#1336 - dataflow needs to hold onto futures passed to it
  • IS#1335 - Fails to compile with msvc14
  • IS#1334 - Examples build problem
  • IS#1333 - Distributed transform reduce
  • IS#1332 - Variadic templates support for actions
  • IS#1331 - Some ambiguous calls of map::erase have been prevented by adding additional check in locality constructor.
  • IS#1330 - Defining Plain Actions does not work as described in the documentation
  • IS#1329 - Distributed vector cleanup
  • IS#1328 - Sync docs and comments with code in hello_world example
  • IS#1327 - Typos in docs
  • IS#1326 - Documentation and code diverged in Fibonacci tutorial
  • IS#1325 - Exceptions thrown during parcel handling are not handled correctly
  • IS#1324 - fixed bandwidth calculation
  • IS#1323 - mmap() failed to allocate thread stack due to insufficient resources
  • IS#1322 - HPX fails to build aa182cf
  • IS#1321 - Limiting size of outgoing messages while coalescing parcels
  • IS#1320 - passing a future with launch::deferred in remote function call causes hang
  • IS#1319 - An exception when tries to specify number high priority threads with abp-priority
  • IS#1318 - Unable to run program with abp-priority and numa-sensitivity enabled
  • IS#1317 - N4071 Search/Search_n finished, minor changes
  • IS#1316 - Add config option to make -Ihpx.run_hpx_main!=1 the default
  • IS#1314 - Variadic support for async and apply
  • IS#1313 - Adjust when_any/some to the latest proposed interfaces
  • IS#1312 - Fixing #857: hpx::naming::locality leaks parcelport specific information into the public interface
  • IS#1311 - Distributed get'er/set'er_values for distributed vector
  • IS#1310 - Crashing in hpx::parcelset::policies::mpi::connection_handler::handle_messages() on SuperMIC
  • IS#1308 - Unable to execute an application with --hpx:threads
  • IS#1307 - merge_graph linking issue
  • IS#1306 - First batch of variadic templates support
  • IS#1305 - Create a compiler wrapper
  • IS#1304 - Provide a compiler wrapper for hpx
  • IS#1303 - Drop support for GCC44
  • IS#1302 - Fixing #1297
  • IS#1301 - Compilation error when tried to use boost range iterators with wait_all
  • IS#1298 - Distributed vector
  • IS#1297 - Unable to invoke component actions recursively
  • IS#1294 - HDF5 build error
  • IS#1275 - The parcelport implementation is non-optimal
  • IS#1267 - Added classes and unit tests for local_file, orangefs_file and pxfs_file
  • IS#1264 - Error "assertion '!m_fun' failed" randomly occurs when using TCP
  • IS#1254 - thread binding seems to not work properly
  • IS#1220 - parallel::copy_if is broken
  • IS#1217 - Find a better way of fixing the issue patched by #1216
  • IS#1168 - Starting HPX on Cray machines using aprun isn't working correctly
  • IS#1085 - Replace startup and shutdown barriers with broadcasts
  • IS#981 - With SLURM, --hpx:threads=8 should not be necessary
  • IS#857 - hpx::naming::locality leaks parcelport specific information into the public interface
  • IS#850 - "flush" not documented
  • IS#763 - Create buildbot instance that uses std::bind as HPX_STD_BIND
  • IS#680 - Convert parcel ports into a plugin system
  • IS#582 - Make exception thrown from HPX threads available from hpx::init
  • IS#504 - Refactor Dataflow LCO to work with futures
  • IS#196 - Don't store copies of the locality network metadata in the gva table
General Changes

We have had over 1500 commits since the last release and have closed over 200 tickets (bugs, feature requests, pull requests, etc.). These are by far the largest numbers of commits and resolved issues for any HPX release so far. We are especially happy about the large number of people who contributed to HPX for the first time.

  • We completed the transition from the older (non-conforming) implementation of hpx::future to the new and fully conforming version by removing the old code and renaming the type hpx::unique_future to hpx::future. In order to maintain backwards compatibility with existing code which uses the type hpx::unique_future, we support the configuration variable HPX_UNIQUE_FUTURE_ALIAS. If this variable is set to ON while running cmake, it will additionally define a template alias for this type.
  • We rewrote and significantly changed our build system. Please have a look at the new (now generated) documentation here: HPX build system. Please revisit your build scripts to adapt to the changes. The most notable changes are:
    • HPX_NO_INSTALL is no longer necessary.
    • For external builds, you need to set HPX_DIR instead of HPX_ROOT as described here: Using CMake.
    • IDEs that support multiple configurations (Visual Studio and XCode) can now be used as intended; this removes the need for a separate build directory per configuration.
    • Building HPX statically (without dynamic libraries) is now supported (-DHPX_STATIC_LINKING=On).
    • Please note that many variables used to configure the build process have been renamed to unify the naming conventions (see the section CMake Variables used to configure HPX for more information).
    • This also fixes a long list of issues, for more information see IS#1204.
  • We started to implement various proposals to the C++ Standardization committee related to parallelism and concurrency, most notably N4409 (Working Draft, Technical Specification for C++ Extensions for Parallelism), N4411 (Task Region Rev. 3), and N4313 (Working Draft, Technical Specification for C++ Extensions for Concurrency).
  • We completely remodeled our automatic build system to run builds and unit tests on various systems and compilers. This allows us to find most bugs right as they were introduced and helps to maintain a high level of quality and compatibility. The newest build logs can be found at HPX Buildbot Website.
Bug Fixes (Closed Tickets)

Here is a list of the important tickets we closed for this release.

  • IS#1296 - Rename make_error_future to make_exceptional_future, adjust to N4123
  • IS#1295 - building issue
  • IS#1293 - Transpose example
  • IS#1292 - Wrong abs() function used in example
  • IS#1291 - non-syncronized shift operators have been removed
  • IS#1290 - RDTSCP is defined as true for Xeon Phi build
  • IS#1289 - Fixing 1288
  • IS#1288 - Add new performance counters
  • IS#1287 - Hierarchy scheduler broken performance counters
  • IS#1286 - Algorithm cleanup
  • IS#1285 - Broken Links in Documentation
  • IS#1284 - Uninitialized copy
  • IS#1283 - missing boost::scoped_ptr includes
  • IS#1282 - Update documentation of build options for schedulers
  • IS#1281 - reset idle rate counter
  • IS#1280 - Bug when executing on Intel MIC
  • IS#1279 - Add improved when_all/wait_all
  • IS#1278 - Implement improved when_all/wait_all
  • IS#1277 - feature request: get access to argc argv and variables_map
  • IS#1276 - Remove merging map
  • IS#1274 - Weird (wrong) string code in papi.cpp
  • IS#1273 - Sequential task execution policy
  • IS#1272 - Avoid CMake name clash for Boost.Thread library
  • IS#1271 - Updates on HPX Test Units
  • IS#1270 - hpx/util/safe_lexical_cast.hpp is added
  • IS#1269 - Added default value for "LIB" cmake variable
  • IS#1268 - Memory Counters not working
  • IS#1266 - FindHPX.cmake is not installed
  • IS#1263 - apply_remote test takes too long
  • IS#1262 - Chrono cleanup
  • IS#1261 - Need make install for papi counters and this builds all the examples
  • IS#1260 - Documentation of Stencil example claims
  • IS#1259 - Avoid double-linking Boost on Windows
  • IS#1257 - Adding additional parameter to create_thread
  • IS#1256 - added buildbot changes to release notes
  • IS#1255 - Cannot build MiniGhost
  • IS#1253 - hpx::thread defects
  • IS#1252 - HPX_PREFIX is too fragile
  • IS#1250 - switch_to_fiber_emulation does not work properly
  • IS#1249 - Documentation is generated under Release folder
  • IS#1248 - Fix usage of hpx_generic_coroutine_context and get tests passing on powerpc
  • IS#1247 - Dynamic linking error
  • IS#1246 - Make cpuid.cpp C++11 compliant
  • IS#1245 - HPX fails on startup (setting thread affinity mask)
  • IS#1244 - HPX_WITH_RDTSC configure test fails, but should succeed
  • IS#1243 - CTest dashboard info for CSCS CDash drop location
  • IS#1242 - Mac fixes
  • IS#1241 - Failure in Distributed with Boost 1.56
  • IS#1240 - fix a race condition in examples.diskperf
  • IS#1239 - fix wait_each in examples.diskperf
  • IS#1238 - Fixed #1237: hpx::util::portable_binary_iarchive failed
  • IS#1237 - hpx::util::portable_binary_iarchive faileds
  • IS#1235 - Fixing clang warnings and errors
  • IS#1234 - TCP runs fail: Transport endpoint is not connected
  • IS#1233 - Making sure the correct number of threads is registered with AGAS
  • IS#1232 - Fixing race in wait_xxx
  • IS#1231 - Parallel minmax
  • IS#1230 - Distributed run of 1d_stencil_8 uses less threads than spec. & sometimes gives errors
  • IS#1229 - Unstable number of threads
  • IS#1228 - HPX link error (cmake / MPI)
  • IS#1226 - Warning about struct/class thread_counters
  • IS#1225 - Adding parallel::replace etc
  • IS#1224 - Extending dataflow to pass through non-future arguments
  • IS#1223 - Remaining find algorithms implemented, N4071
  • IS#1222 - Merging all the changes
  • IS#1221 - No error output when using mpirun with hpx
  • IS#1219 - Adding new AGAS cache performance counters
  • IS#1216 - Fixing using futures (clients) as arguments to actions
  • IS#1215 - Error compiling simple component
  • IS#1214 - Stencil docs
  • IS#1213 - Using more than a few dozen MPI processes on SuperMike results in a seg fault before getting to hpx_main
  • IS#1212 - Parallel rotate
  • IS#1211 - Direct actions cause the future's shared_state to be leaked
  • IS#1210 - Refactored local::promise to be standard conformant
  • IS#1209 - Improve command line handling
  • IS#1208 - Adding parallel::reverse and parallel::reverse_copy
  • IS#1207 - Add copy_backward and move_backward
  • IS#1206 - N4071 additional algorithms implemented
  • IS#1204 - Cmake simplification and various other minor changes
  • IS#1203 - Implementing new launch policy for (local) async: hpx::launch::fork.
  • IS#1202 - Failed assertion in connection_cache.hpp
  • IS#1201 - pkg-config doesn't add mpi link directories
  • IS#1200 - Error when querying time performance counters
  • IS#1199 - library path is now configurable (again)
  • IS#1198 - Error when querying performance counters
  • IS#1197 - tests fail with intel compiler
  • IS#1196 - Silence several warnings
  • IS#1195 - Rephrase initializers to work with VC++ 2012
  • IS#1194 - Simplify parallel algorithms
  • IS#1193 - Adding parallel::equal
  • IS#1192 - HPX(out_of_memory) on including <hpx/hpx.hpp>
  • IS#1191 - Fixing #1189
  • IS#1190 - Chrono cleanup
  • IS#1189 - Deadlock .. somewhere? (probably serialization)
  • IS#1188 - Removed future::get_status()
  • IS#1186 - Fixed FindOpenCL to find current AMD APP SDK
  • IS#1184 - Tweaking future unwrapping
  • IS#1183 - Extended parallel::reduce
  • IS#1182 - future::unwrap hangs for launch::deferred
  • IS#1181 - Adding all_of, any_of, and none_of and corresponding documentation
  • IS#1180 - hpx::cout defect
  • IS#1179 - hpx::async does not work for member function pointers when called on types with self-defined unary operator*
  • IS#1178 - Implemented variadic hpx::util::zip_iterator
  • IS#1177 - MPI parcelport defect
  • IS#1176 - HPX_DEFINE_COMPONENT_CONST_ACTION_TPL does not have a 2-argument version
  • IS#1175 - Create util::zip_iterator working with util::tuple<>
  • IS#1174 - Error Building HPX on linux, root_certificate_authority.cpp
  • IS#1173 - hpx::cout output lost
  • IS#1172 - HPX build error with Clang 3.4.2
  • IS#1171 - CMAKE_INSTALL_PREFIX ignored
  • IS#1170 - Close hpx_benchmarks repository on Github
  • IS#1169 - Buildbot emails have syntax error in url
  • IS#1167 - Merge partial implementation of standards proposal N3960
  • IS#1166 - Fixed several compiler warnings
  • IS#1165 - cmake warns: "tests.regressions.actions" does not exist
  • IS#1164 - Want my own serialization of hpx::future
  • IS#1162 - Segfault in hello_world example
  • IS#1161 - Use HPX_ASSERT to aid the compiler
  • IS#1160 - Do not put -DNDEBUG into hpx_application.pc
  • IS#1159 - Support Clang 3.4.2
  • IS#1158 - Fixed #1157: Rename when_n/wait_n, add when_xxx_n/wait_xxx_n
  • IS#1157 - Rename when_n/wait_n, add when_xxx_n/wait_xxx_n
  • IS#1156 - Force inlining fails
  • IS#1155 - changed header of printout to be compatible with python csv module
  • IS#1154 - Fixing iostreams
  • IS#1153 - Standard manipulators (like std::endl) do not work with hpx::ostream
  • IS#1152 - Functions revamp
  • IS#1151 - Supressing cmake 3.0 policy warning for CMP0026
  • IS#1150 - Client Serialization error
  • IS#1149 - Segfault on Stampede
  • IS#1148 - Refactoring mini-ghost
  • IS#1147 - N3960 copy_if and copy_n implemented and tested
  • IS#1146 - Stencil print
  • IS#1145 - N3960 hpx::parallel::copy implemented and tested
  • IS#1144 - OpenMP examples 1d_stencil do not build
  • IS#1143 - 1d_stencil OpenMP examples do not build
  • IS#1142 - Cannot build HPX with gcc 4.6 on OS X
  • IS#1140 - Fix OpenMP lookup, enable usage of config tests in external CMake projects.
  • IS#1139 - hpx/hpx/config/compiler_specific.hpp
  • IS#1138 - clean up pkg-config files
  • IS#1137 - Improvements to create binary packages
  • IS#1136 - HPX_GCC_VERSION not defined on all compilers
  • IS#1135 - Avoiding collision between winsock2.h and windows.h
  • IS#1134 - Making sure, that hpx::finalize can be called from any locality
  • IS#1133 - 1d stencil examples
  • IS#1131 - Refactor unique_function implementation
  • IS#1130 - Unique function
  • IS#1129 - Some fixes to the Build system on OS X
  • IS#1128 - Action future args
  • IS#1127 - Executor causes segmentation fault
  • IS#1124 - Adding new API functions: register_id_with_basename, unregister_id_with_basename, find_ids_from_basename; adding test
  • IS#1123 - Reduce nesting of try-catch construct in encode_parcels?
  • IS#1122 - Client base fixes
  • IS#1121 - Update hpxrun.py.in
  • IS#1120 - HTTS2 tests compile errors on v110 (VS2012)
  • IS#1119 - Remove references to boost::atomic in accumulator example
  • IS#1118 - Only build test thread_pool_executor_1114_test if HPX_LOCAL_SCHEDULER is set
  • IS#1117 - local_queue_executor linker error on vc110
  • IS#1116 - Disabled performance counter should give runtime errors, not invalid data
  • IS#1115 - Compile error with Intel C++ 13.1
  • IS#1114 - Default constructed executor is not usable
  • IS#1113 - Fast compilation of logging causes ABI incompatibilities between different NDEBUG values
  • IS#1112 - Using thread_pool_executors causes segfault
  • IS#1111 - hpx::threads::get_thread_data always returns zero
  • IS#1110 - Remove unnecessary null pointer checks
  • IS#1109 - More tests adjustments
  • IS#1108 - Clarify build rules for "libboost_atomic-mt.so"?
  • IS#1107 - Remove unnecessary null pointer checks
  • IS#1106 - network_storage benchmark imporvements, adding legends to plots and tidying layout
  • IS#1105 - Add more plot outputs and improve instructions doc
  • IS#1104 - Complete quoting for parameters of some CMake commands
  • IS#1103 - Work on test/scripts
  • IS#1102 - Changed minimum requirement of window install to 2012
  • IS#1101 - Changed minimum requirement of window install to 2012
  • IS#1100 - Changed readme to no longer specify using MSVC 2010 compiler
  • IS#1099 - Error returning futures from component actions
  • IS#1098 - Improve storage test
  • IS#1097 - data_actions quickstart example calls missing function decorate_action of data_get_action
  • IS#1096 - MPI parcelport broken with new zero copy optimization
  • IS#1095 - Warning C4005: _WIN32_WINNT: Macro redefinition
  • IS#1094 - Syntax error for -DHPX_UNIQUE_FUTURE_ALIAS in master
  • IS#1093 - Syntax error for -DHPX_UNIQUE_FUTURE_ALIAS
  • IS#1092 - Rename unique_future<> back to future<>
  • IS#1091 - Inconsistent error message
  • IS#1090 - On windows 8.1 the examples crashed if using more than one os thread
  • IS#1089 - Components should be allowed to have their own executor
  • IS#1088 - Add possibility to select a network interface for the ibverbs parcelport
  • IS#1087 - ibverbs and ipc parcelport uses zero copy optimization
  • IS#1083 - Make shell examples copyable in docs
  • IS#1082 - Implement proper termination detection during shutdown
  • IS#1081 - Implement thread_specific_ptr for hpx::threads
  • IS#1072 - make install not working properly
  • IS#1070 - Complete quoting for parameters of some CMake commands
  • IS#1059 - Fix more unused variable warnings
  • IS#1051 - Implement when_each
  • IS#973 - Would like option to report hwloc bindings
  • IS#970 - Bad flags for Fortran compiler
  • IS#941 - Create a proper user level context switching class for BG/Q
  • IS#935 - Build error with gcc 4.6 and Boost 1.54.0 on hpx trunk and 0.9.6
  • IS#934 - Want to build HPX without dynamic libraries
  • IS#927 - Make hpx/lcos/reduce.hpp accept futures of id_type
  • IS#926 - All unit tests that are run with more than one thread with CTest/hpx_run_test should configure hpx.os_threads
  • IS#925 - regression_dataflow_791 needs to be brought in line with HPX standards
  • IS#899 - Fix race conditions in regression tests
  • IS#879 - Hung test leads to cascading test failure; make tests should support the MPI parcelport
  • IS#865 - future<T> and friends shall work for movable only Ts
  • IS#847 - Dynamic libraries are not installed on OS X
  • IS#816 - First Program tutorial pull request
  • IS#799 - Wrap lexical_cast to avoid exceptions
  • IS#720 - broken configuration when using ccmake on Ubuntu
  • IS#622 - --hpx:hpx and --hpx:debug-hpx-log is nonsensical
  • IS#525 - Extend barrier LCO test to run in distributed
  • IS#515 - Multi-destination version of hpx::apply is broken
  • IS#509 - Push Boost.Atomic changes upstream
  • IS#503 - Running HPX applications on Windows should not require setting %PATH%
  • IS#461 - Add a compilation sanity test
  • IS#456 - hpx_run_tests.py should log output from tests that timeout
  • IS#454 - Investigate threadmanager performance
  • IS#345 - Add more versatile environmental/cmake variable support to hpx_find_* CMake macros
  • IS#209 - Support multiple configurations in generated build files
  • IS#190 - hpx::cout should be a std::ostream
  • IS#189 - iostreams component should use startup/shutdown functions
  • IS#183 - Use Boost.ICL for correctness in AGAS
  • IS#44 - Implement real futures

We have had over 800 commits since the last release and we have closed over 65 tickets (bugs, feature requests, etc.).

With the changes below, HPX is once again leading the charge into a whole new era of computation. By intrinsically breaking down and synchronizing the work to be done, HPX ensures that application developers no longer have to fret about where a segment of code executes. That allows coders to focus their time and energy on understanding the data dependencies of their algorithms, and thereby the core obstacles to efficient code. Here are some of the advantages of using HPX:

  • HPX is solidly rooted in a sophisticated theoretical execution model -- ParalleX
  • HPX exposes an API fully conforming to the C++11 and the draft C++14 standards, extended and applied to distributed computing. Everything programmers know about the concurrency primitives of the standard C++ library is still valid in the context of HPX.
  • It provides a competitive, high performance implementation of modern, future-proof ideas, which gives a smooth migration path from today's mainstream techniques
  • There is no need for the programmer to worry about lower level parallelization paradigms like threads or message passing; no need to understand pthreads, MPI, OpenMP, or Windows threads, etc.
  • There is no need to think about different types of parallelism, such as task parallelism, pipelines, fork-join, or data parallelism.
  • The same source code compiles and runs on Linux, BlueGene/Q, Mac OS X, Windows, and Android.
  • The same code runs on shared memory multi-core systems and supercomputers, on handheld devices and Intel® Xeon Phi™ accelerators, or a heterogeneous mix of those.
General Changes
  • A major API breaking change for this release was introduced by implementing hpx::future and hpx::shared_future fully in conformance with the C++11 Standard. While hpx::shared_future is new and will not create any compatibility problems, we revised the interface and implementation of the existing hpx::future. For more details please see the mailing list archive. To avoid any incompatibilities for existing code we named the type which implements the std::future interface as hpx::unique_future. For the next release this will be renamed to hpx::future, making it fully conforming to the C++11 Standard.
  • A large part of the code base of HPX has been refactored and partially re-implemented. The main changes were related to
    • The threading subsystem: these changes significantly reduce the amount of overheads caused by the schedulers, improve the modularity of the code base, and extend the variety of available scheduling algorithms.
    • The parcel subsystem: these changes improve the performance of the HPX networking layer, modularize the structure of the parcelports, and simplify the creation of new parcelports for other underlying networking libraries.
    • The API subsystem: these changes improve the conformance of the API to the C++11 Standard, extend and unify the available API functionality, and decrease the overheads created by various elements of the API.
    • The robustness of the component loading subsystem has been improved significantly, allowing the components needed by an application to be registered more portably and more reliably at startup. This additionally speeds up general application initialization.
  • We added new API functionality like hpx::migrate and hpx::copy_component, which are the basic building blocks necessary for implementing higher level abstractions for system-wide load balancing, runtime-adaptive resource management, and object-oriented checkpointing and state-management.
  • We removed the use of C++11 move emulation (using Boost.Move), replacing it with C++11 rvalue references. This is the first step towards using more and more native C++11 facilities which we plan to introduce in the future.
  • We improved the reference counting scheme used by HPX which helps manage distributed objects and memory. This improves the overall stability of HPX and further simplifies writing real-world applications.
  • The minimal Boost version required to use HPX is now V1.49.0.
  • This release coincides with the first release of HPXPI (V0.1.0), the first implementation of the XPI specification.
Bug Fixes (Closed Tickets)

Here is a list of the important tickets we closed for this release.

  • IS#1086 - Expose internal boost::shared_array to allow user management of array lifetime
  • IS#1083 - Make shell examples copyable in docs
  • IS#1080 - /threads{locality#*/total}/count/cumulative broken
  • IS#1079 - Build problems on OS X
  • IS#1078 - Improve robustness of component loading
  • IS#1077 - Fix a missing enum definition for 'take' mode
  • IS#1076 - Merge Jb master
  • IS#1075 - Unknown CMake command "add_hpx_pseudo_target"
  • IS#1074 - Implement apply_continue_callback and apply_colocated_callback
  • IS#1073 - The new apply_colocated and async_colocated functions lead to automatic registered functions
  • IS#1071 - Remove deferred_packaged_task
  • IS#1069 - serialize_buffer with allocator fails at destruction
  • IS#1068 - Coroutine include and forward declarations missing
  • IS#1067 - Add allocator support to util::serialize_buffer
  • IS#1066 - Allow for MPI_Init being called before HPX launches
  • IS#1065 - AGAS cache isn't used/populated on worker localities
  • IS#1064 - Reorder includes to ensure ws2 includes early
  • IS#1063 - Add hpx::runtime::suspend and hpx::runtime::resume
  • IS#1062 - Fix async_continue to propery handle return types
  • IS#1061 - Implement async_colocated and apply_colocated
  • IS#1060 - Implement minimal component migration
  • IS#1058 - Remove HPX_UTIL_TUPLE from code base
  • IS#1057 - Add performance counters for threading subsystem
  • IS#1055 - Thread allocation uses two memory pools
  • IS#1053 - Work stealing flawed
  • IS#1052 - Fix a number of warnings
  • IS#1049 - Fixes for TLS on OSX and more reliable test running
  • IS#1048 - Fixing after 588 hang
  • IS#1047 - Use port '0' for networking when using one locality
  • IS#1046 - composable_guard test is broken when having more than one thread
  • IS#1045 - Security missing headers
  • IS#1044 - Native TLS on FreeBSD via __thread
  • IS#1043 - async et.al. compute the wrong result type
  • IS#1042 - async et.al. implicitly unwrap reference_wrappers
  • IS#1041 - Remove redundant costly Kleene stars from regex searches
  • IS#1040 - CMake script regex match patterns has unnecessary kleenes
  • IS#1039 - Remove use of Boost.Move and replace with std::move and real rvalue refs
  • IS#1038 - Bump minimal required Boost to 1.49.0
  • IS#1037 - Implicit unwrapping of futures in async broken
  • IS#1036 - Scheduler hangs when user code attempts to "block" OS-threads
  • IS#1035 - Idle-rate counter always reports 100% idle rate
  • IS#1034 - Symbolic name registration causes application hangs
  • IS#1033 - Application options read in from an options file generate an error message
  • IS#1032 - hpx::id_type local reference counting is wrong
  • IS#1031 - Negative entry in reference count table
  • IS#1030 - Implement condition_variable
  • IS#1029 - Deadlock in thread scheduling subsystem
  • IS#1028 - HPX-thread cumulative count performance counters report incorrect value
  • IS#1027 - Expose hpx::thread_interrupted error code as a separate exception type
  • IS#1026 - Exceptions thrown in asynchronous calls can be lost if the value of the future is never queried
  • IS#1025 - future::wait_for/wait_until do not remove callback
  • IS#1024 - Remove dependence to boost assert and create hpx assert
  • IS#1023 - Segfaults with tcmalloc
  • IS#1022 - prerequisites link in readme is broken
  • IS#1020 - HPX Deadlock on external synchronization
  • IS#1019 - Convert using BOOST_ASSERT to HPX_ASSERT
  • IS#1018 - compiling bug with gcc 4.8.1
  • IS#1017 - Possible crash in io_pool executor
  • IS#1016 - Crash at startup
  • IS#1014 - Implement Increment/Decrement Merging
  • IS#1013 - Add more logging channels to enable greater control over logging granularity
  • IS#1012 - --hpx:debug-hpx-log and --hpx:debug-agas-log lead to non-thread safe writes
  • IS#1011 - After installation, running applications from the build/staging directory no longer works
  • IS#1010 - Mergable decrement requests are not being merged
  • IS#1009 - --hpx:list-symbolic-names crashes
  • IS#1007 - Components are not properly destroyed
  • IS#1006 - Segfault/hang in set_data
  • IS#1003 - Performance counter naming issue
  • IS#982 - Race condition during startup
  • IS#912 - OS X: component type not found in map
  • IS#663 - Create a buildbot slave based on Clang 3.2/OSX
  • IS#636 - Expose this_locality::apply<act>(p1, p2); for local execution
  • IS#197 - Add --console=address option for PBS runs
  • IS#175 - Asynchronous AGAS API

We have had over 1000 commits since the last release and we have closed over 180 tickets (bugs, feature requests, etc.).

General Changes
  • Ported HPX to BlueGene/Q
  • Improved HPX support for Xeon/Phi accelerators
  • Reimplemented hpx::bind, hpx::tuple, and hpx::function for better performance and better compliance with the C++11 Standard. Added hpx::mem_fn.
  • Reworked hpx::when_all and hpx::when_any for better compliance with the ongoing C++ standardization effort, and added heterogeneous versions of these functions. Added hpx::when_any_swapped.
  • Added hpx::copy as a precursor for a migrate functionality
  • Added hpx::get_ptr, allowing direct access to the memory underlying a given component
  • Added the hpx::lcos::broadcast, hpx::lcos::reduce, and hpx::lcos::fold collective operations
  • Added hpx::get_locality_name, allowing retrieval of the name of any of the localities of the application.
  • Added support for more flexible thread affinity control from the HPX command line, such as new modes for --hpx:bind (balanced, scattered, compact), improved default settings when running multiple localities on the same node.
  • Added experimental executors for simpler thread pooling and scheduling. This API may change in the future as it will stay aligned with the ongoing C++ standardization efforts.
  • Massively improved the performance of the HPX serialization code. Added partial support for zero copy serialization of array and bitwise-copyable types.
  • General performance improvements of the code related to threads and futures.
Bug Fixes (Closed Tickets)

Here is a list of the important tickets we closed for this release.

  • IS#1005 - Allow to disable array optimizations and zero copy optimizations for each parcelport
  • IS#1004 - Generate new HPX logo image for the docs
  • IS#1002 - If MPI parcelport is not available, running HPX under mpirun should fail
  • IS#1001 - Zero copy serialization raises assert
  • IS#1000 - Can't connect to a HPX application running with the MPI parcelport from a non MPI parcelport locality
  • IS#999 - Optimize hpx::when_n
  • IS#998 - Fixed const-correctness
  • IS#997 - Making serialize_buffer::data() type save
  • IS#996 - Memory leak in hpx::lcos::promise
  • IS#995 - Race while registering pre-shutdown functions
  • IS#994 - thread_rescheduling regression test does not compile
  • IS#992 - Correct comments and messages
  • IS#991 - setcap cap_sys_rawio=ep for power profiling causes an HPX application to abort
  • IS#989 - Jacobi hangs during execution
  • IS#988 - multiple_init test is failing
  • IS#986 - Can't call a function called "init" from "main" when using <hpx/hpx_main.hpp>
  • IS#984 - Reference counting tests are failing
  • IS#983 - thread_suspension_executor test fails
  • IS#980 - Terminating HPX threads don't leave stack in virgin state
  • IS#979 - Static scheduler not in documents
  • IS#978 - Preprocessing limits are broken
  • IS#977 - Make tests.regressions.lcos.future_hang_on_get shorter
  • IS#976 - Wrong library order in pkgconfig
  • IS#975 - Please reopen #963
  • IS#974 - Option pu-offset ignored in fixing_588 branch
  • IS#972 - Cannot use MKL with HPX
  • IS#969 - Non-existent INI files requested on the command line via --hpx:config do not cause warnings or errors.
  • IS#968 - Cannot build examples in fixing_588 branch
  • IS#967 - Command line description of --hpx:queuing seems wrong
  • IS#966 - --hpx:print-bind physical core numbers are wrong
  • IS#965 - Deadlock when building in Release mode
  • IS#963 - Not all worker threads are working
  • IS#962 - Problem with SLURM integration
  • IS#961 - --hpx:print-bind outputs incorrect information
  • IS#960 - Fix cut and paste error in documentation of get_thread_priority
  • IS#959 - Change link to boost.atomic in documentation to point to boost.org
  • IS#958 - Undefined reference to intrusive_ptr_release
  • IS#957 - Make tuple standard compliant
  • IS#956 - Segfault with a3382fb
  • IS#955 - --hpx:nodes and --hpx:nodefiles do not work with foreign nodes
  • IS#954 - Make order of arguments for hpx::async and hpx::broadcast consistent
  • IS#953 - Cannot use MKL with HPX
  • IS#952 - register_[pre_]shutdown_function never throw
  • IS#951 - Assert when number of threads is greater than hardware concurrency
  • IS#948 - HPX_HAVE_GENERIC_CONTEXT_COROUTINES conflicts with HPX_HAVE_FIBER_BASED_COROUTINES
  • IS#947 - Need MPI_THREAD_MULTIPLE for backward compatibility
  • IS#946 - HPX does not call MPI_Finalize
  • IS#945 - Segfault with hpx::lcos::broadcast
  • IS#944 - OS X: assertion 'pu_offset_ < hardware_concurrency' failed
  • IS#943 - #include <hpx/hpx_main.hpp> does not work
  • IS#942 - Make the BG/Q work with -O3
  • IS#940 - Use separator when concatenating locality name
  • IS#939 - Refactor MPI parcelport to use MPI_Wait instead of multiple MPI_Test calls
  • IS#938 - Want to officially access client_base::gid_
  • IS#937 - client_base::gid_ should be private
  • IS#936 - Want doxygen-like source code index
  • IS#935 - Build error with gcc 4.6 and Boost 1.54.0 on hpx trunk and 0.9.6
  • IS#933 - Cannot build HPX with Boost 1.54.0
  • IS#932 - Components are destructed too early
  • IS#931 - Make HPX work on BG/Q
  • IS#930 - make git-docs is broken
  • IS#929 - Generating index in docs broken
  • IS#928 - Optimize hpx::util::static_ for C++11 compilers supporting magic statics
  • IS#924 - Make kill_process_tree (in process.py) more robust on Mac OSX
  • IS#923 - Correct BLAS and RNPL cmake tests
  • IS#922 - Cannot link against BLAS
  • IS#921 - Implement hpx::mem_fn
  • IS#920 - Output locality with --hpx:print-bind
  • IS#919 - Correct grammar; simplify boolean expressions
  • IS#918 - Link to hello_world.cpp is broken
  • IS#917 - adapt cmake file to new boostbook version
  • IS#916 - fix problem building documentation with xsltproc >= 1.1.27
  • IS#915 - Add another TBBMalloc library search path
  • IS#914 - Build problem with Intel compiler on Stampede (TACC)
  • IS#913 - fix error messages in fibonacci examples
  • IS#911 - Update OS X build instructions
  • IS#910 - Want like to specify MPI_ROOT instead of compiler wrapper script
  • IS#909 - Warning about void* arithmetic
  • IS#908 - Buildbot for MIC is broken
  • IS#906 - Can't use --hpx:bind=balanced with multiple MPI processes
  • IS#905 - --hpx:bind documentation should describe full grammar
  • IS#904 - Add hpx::lcos::fold and hpx::lcos::inverse_fold collective operation
  • IS#903 - Add hpx::when_any_swapped()
  • IS#902 - Add hpx::lcos::reduce collective operation
  • IS#901 - Web documentation is not searchable
  • IS#900 - Web documentation for trunk has no index
  • IS#898 - Some tests fail with GCC 4.8.1 and MPI parcel port
  • IS#897 - HWLOC causes failures on Mac
  • IS#896 - pu-offset leads to startup error
  • IS#895 - hpx::get_locality_name not defined
  • IS#894 - Race condition at shutdown
  • IS#893 - --hpx:print-bind switches std::cout to hexadecimal mode
  • IS#892 - hwloc_topology_load can be expensive -- don't call multiple times
  • IS#891 - The documentation for get_locality_name is wrong
  • IS#890 - --hpx:print-bind should not exit
  • IS#889 - --hpx:debug-hpx-log=FILE does not work
  • IS#888 - MPI parcelport does not exit cleanly for --hpx:print-bind
  • IS#887 - Choose thread affinities more cleverly
  • IS#886 - Logging documentation is confusing
  • IS#885 - Two threads are slower than one
  • IS#884 - is_callable failing with member pointers in C++11
  • IS#883 - Need help with is_callable_test
  • IS#882 - tests.regressions.lcos.future_hang_on_get does not terminate
  • IS#881 - tests/regressions/block_matrix/matrix.hh won't compile with GCC 4.8.1
  • IS#880 - HPX does not work on OS X
  • IS#878 - future::unwrap triggers assertion
  • IS#877 - "make tests" has build errors on Ubuntu 12.10
  • IS#876 - tcmalloc is used by default, even if it is not present
  • IS#875 - global_fixture is defined in a header file
  • IS#874 - Some tests take very long
  • IS#873 - Add block-matrix code as regression test
  • IS#872 - HPX documentation does not say how to run tests with detailed output
  • IS#871 - All tests fail with "make test"
  • IS#870 - Please explicitly disable serialization in classes that don't support it
  • IS#868 - boost_any test failing
  • IS#867 - Reduce the number of copies of hpx::function arguments
  • IS#863 - Futures should not require a default constructor
  • IS#862 - value_or_error shall not default construct its result
  • IS#861 - HPX_UNUSED macro
  • IS#860 - Add functionality to copy construct a component
  • IS#859 - hpx::endl should flush
  • IS#858 - Create hpx::get_ptr<> allowing to access component implementation
  • IS#855 - Implement hpx::INVOKE
  • IS#854 - hpx/hpx.hpp does not include hpx/include/iostreams.hpp
  • IS#853 - Feature request: null future
  • IS#852 - Feature request: Locality names
  • IS#851 - hpx::cout output does not appear on screen
  • IS#849 - All tests fail on OS X after installing
  • IS#848 - Update OS X build instructions
  • IS#846 - Update hpx_external_example
  • IS#845 - Issues with having both debug and release modules in the same directory
  • IS#844 - Create configuration header
  • IS#843 - Tests should use CTest
  • IS#842 - Remove buffer_pool from MPI parcelport
  • IS#841 - Add possibility to broadcast an index with hpx::lcos::broadcast
  • IS#838 - Simplify util::tuple
  • IS#837 - Adopt boost::tuple tests for util::tuple
  • IS#836 - Adopt boost::function tests for util::function
  • IS#835 - Tuple interface missing pieces
  • IS#833 - Partially preprocessing files not working
  • IS#832 - Native papi counters do not work with wild cards
  • IS#831 - Arithmetics counter fails if only one parameter is given
  • IS#830 - Convert hpx::util::function to use new scheme for serializing its base pointer
  • IS#829 - Consistently use decay<T> instead of remove_const< remove_reference<T>>
  • IS#828 - Update future implementation to N3721 and N3722
  • IS#827 - Enable MPI parcelport for bootstrapping whenever application was started using mpirun
  • IS#826 - Support command line option --hpx:print-bind even if --hpx::bind was not used
  • IS#825 - Memory counters give segfault when attempting to use thread wild cards or numbers only total works
  • IS#824 - Enable lambda functions to be used with hpx::async/hpx::apply
  • IS#823 - Using a hashing filter
  • IS#822 - Silence unused variable warning
  • IS#821 - Detect if a function object is callable with given arguments
  • IS#820 - Allow wildcards to be used for performance counter names
  • IS#819 - Make the AGAS symbolic name registry distributed
  • IS#818 - Add future::then() overload taking an executor
  • IS#817 - Fixed typo
  • IS#815 - Create an lco that is performing an efficient broadcast of actions
  • IS#814 - Papi counters cannot specify thread#* to get the counts for all threads
  • IS#813 - Scoped unlock
  • IS#811 - simple_central_tuplespace_client run error
  • IS#810 - ostream error when << any objects
  • IS#809 - Optimize parcel serialization
  • IS#808 - HPX applications throw exception when executed from the build directory
  • IS#807 - Create performance counters exposing overall AGAS statistics
  • IS#795 - Create timed make_ready_future
  • IS#794 - Create heterogeneous when_all/when_any/etc.
  • IS#721 - Make HPX usable for Xeon Phi
  • IS#694 - CMake should complain if you attempt to build an example without its dependencies
  • IS#692 - SLURM support broken
  • IS#683 - python/hpx/process.py imports epoll on all platforms
  • IS#619 - Automate the doc building process
  • IS#600 - GTC performance broken
  • IS#577 - Allow for zero copy serialization/networking
  • IS#551 - Change executable names to have debug postfix in Debug builds
  • IS#544 - Write a custom .lib file on Windows pulling in hpx_init and hpx.dll, phase out hpx_init
  • IS#534 - hpx::init should take functions by std::function and should accept all forms of hpx_main
  • IS#508 - FindPackage fails to set FOO_LIBRARY_DIR
  • IS#506 - Add cmake support to generate ini files for external applications
  • IS#470 - Changing build-type after configure does not update boost library names
  • IS#453 - Document hpx_run_tests.py
  • IS#445 - Significant performance mismatch between MPI and HPX in SMP for allgather example
  • IS#443 - Make docs viewable from build directory
  • IS#421 - Support multiple HPX instances per node in a batch environment like PBS or SLURM
  • IS#316 - Add message size limitation
  • IS#249 - Clean up locking code in big boot barrier
  • IS#136 - Persistent CMake variables need to be marked as cache variables

We have had over 1200 commits since the last release and we have closed roughly 140 tickets (bugs, feature requests, etc.).

General Changes

The major new features in this release are:

  • We further consolidated the API exposed by HPX. We aligned our APIs as much as possible with the existing C++11 Standard and related proposals to the C++ standardization committee (such as N3632 and N3857).
  • We implemented a first version of a distributed AGAS service which essentially eliminates all explicit AGAS network traffic.
  • We created a native ibverbs parcelport, allowing HPX to take advantage of the superior latency and bandwidth characteristics of InfiniBand networks.
  • We successfully ported HPX to the Xeon Phi platform.
  • Support for the SLURM scheduling system was implemented.
  • Major efforts have been dedicated to improving the performance counter framework; numerous new counters were implemented and new APIs were added.
  • We added a modular parcel compression system which improves bandwidth utilization (by reducing the overall size of the transferred data).
  • We added a modular parcel coalescing system which combines several parcels into larger messages. This reduces latencies introduced by the communication layer.
  • Added an experimental executors API which allows using different scheduling policies for different parts of the code. This API has been modelled after the Standards proposal N3562. This API is bound to change in the future, though.
  • Added minimal security support for localities which is enforced on the parcelport level. This support is preliminary and experimental and might change in the future.
  • We created a parcelport using low level MPI functions. This is in support of legacy applications which are to be gradually ported and to support platforms where MPI is the only available portable networking layer.
  • We added a preliminary and experimental implementation of a tuple-space object which exposes an interface similar to such systems described in the literature (see for instance The Linda Coordination Language).
Bug Fixes (Closed Tickets)

Here is a list of the important tickets we closed for this release. This is again a very long list of newly implemented features and fixed issues.

  • IS#806 - make (all) in examples folder does nothing
  • IS#805 - Adding the introduction and fixing DOCBOOK dependencies for Windows use
  • IS#804 - Add stackless (non-suspendable) thread type
  • IS#803 - Create proper serialization support functions for util::tuple
  • IS#800 - Add possibility to disable array optimizations during serialization
  • IS#798 - HPX_LIMIT does not work for local dataflow
  • IS#797 - Create a parcelport which uses MPI
  • IS#796 - Problem with Large Numbers of Threads
  • IS#793 - Changing dataflow test case to hang consistently
  • IS#792 - CMake Error
  • IS#791 - Problems with local::dataflow
  • IS#790 - wait_for() doesn't compile
  • IS#789 - HPX with Intel compiler segfaults
  • IS#788 - Intel compiler support
  • IS#787 - Fixed SFINAEd specializations
  • IS#786 - Memory issues during benchmarking.
  • IS#785 - Create an API allowing to register external threads with HPX
  • IS#784 - util::plugin is throwing an error when a symbol is not found
  • IS#783 - How does hpx:bind work?
  • IS#782 - Added quotes around STRING REPLACE potentially empty arguments
  • IS#781 - Make sure no exceptions propagate into the thread manager
  • IS#780 - Allow arithmetics performance counters to expand its parameters
  • IS#779 - Test case for 778
  • IS#778 - Swapping futures segfaults
  • IS#777 - hpx::lcos::details::when_xxx don't restore completion handlers
  • IS#776 - Compiler chokes on dataflow overload with launch policy
  • IS#775 - Runtime error with local dataflow (copying futures?)
  • IS#774 - Using local dataflow without explicit namespace
  • IS#773 - Local dataflow with unwrap: functor operators need to be const
  • IS#772 - Allow (remote) actions to return a future
  • IS#771 - Setting HPX_LIMIT gives huge boost MPL errors
  • IS#770 - Add launch policy to (local) dataflow
  • IS#769 - Make compile time configuration information available
  • IS#768 - Const correctness problem in local dataflow
  • IS#767 - Add launch policies to async
  • IS#766 - Mark data structures for optimized (array based) serialization
  • IS#765 - Align hpx::any with N3508: Any Library Proposal (Revision 2)
  • IS#764 - Align hpx::future with newest N3558: A Standardized Representation of Asynchronous Operations
  • IS#762 - added a human readable output for the ping pong example
  • IS#761 - Ambiguous typename when constructing derived component
  • IS#760 - Simple components can not be derived
  • IS#759 - make install doesn't give a complete install
  • IS#758 - Stack overflow when using locking_hook<>
  • IS#757 - copy paste error; unsupported function overloading
  • IS#756 - GTCX runtime issue in Gordon
  • IS#755 - Papi counters don't work with reset and evaluate API's
  • IS#753 - cmake bugfix and improved component action docs
  • IS#752 - hpx simple component docs
  • IS#750 - Add hpx::util::any
  • IS#749 - Thread phase counter is not reset
  • IS#748 - Memory performance counter are not registered
  • IS#747 - Create performance counters exposing arithmetic operations
  • IS#745 - apply_callback needs to invoke callback when applied locally
  • IS#744 - CMake fixes
  • IS#743 - Problem Building github version of HPX
  • IS#742 - Remove HPX_STD_BIND
  • IS#741 - assertion 'px != 0' failed: HPX(assertion_failure) for low numbers of OS threads
  • IS#739 - Performance counters do not count to the end of the program or evalution
  • IS#738 - Dedicated AGAS server runs don't work; console ignores -a option.
  • IS#737 - Missing bind overloads
  • IS#736 - Performance counter wildcards do not always work
  • IS#735 - Create native ibverbs parcelport based on rdma operations
  • IS#734 - Threads stolen performance counter total is incorrect
  • IS#733 - Test benchmarks need to be checked and fixed
  • IS#732 - Build fails with Mac, using mac ports clang-3.3 on latest git branch
  • IS#731 - Add global start/stop API for performance counters
  • IS#730 - Performance counter values are apparently incorrect
  • IS#729 - Unhandled switch
  • IS#728 - Serialization of hpx::util::function between two localities causes seg faults
  • IS#727 - Memory counters on Mac OS X
  • IS#725 - Restore original thread priority on resume
  • IS#724 - Performance benchmarks do not depend on main HPX libraries
  • IS#723 - --hpx:nodes=cat $PBS_NODEFILE works; --hpx:nodefile=$PBS_NODEFILE does not.
  • IS#722 - Fix binding const member functions as actions
  • IS#719 - Create performance counter exposing compression ratio
  • IS#718 - Add possibility to compress parcel data
  • IS#717 - strip_credit_from_gid has misleading semantics
  • IS#716 - Non-option arguments to programs run using pbsdsh must be before --hpx:nodes, contrary to directions
  • IS#715 - Re-thrown exceptions should retain the original call site
  • IS#714 - failed assertion in debug mode
  • IS#713 - Add performance counters monitoring connection caches
  • IS#712 - Adjust parcel related performance counters to be connection type specific
  • IS#711 - configuration failure
  • IS#710 - Error "timed out while trying to find room in the connection cache" when trying to start multiple localities on a single computer
  • IS#709 - Add new thread state 'staged' referring to task descriptions
  • IS#708 - Detect/mitigate bad non-system installs of GCC on Redhat systems
  • IS#707 - Many examples do not link with Git HEAD version
  • IS#706 - hpx::init removes portions of non-option command line arguments before last = sign
  • IS#705 - Create rolling average and median aggregating performance counters
  • IS#704 - Create performance counter to expose thread queue waiting time
  • IS#703 - Add support to HPX build system to find librcrtool.a and related headers
  • IS#699 - Generalize instrumentation support
  • IS#698 - compilation failure with hwloc absent
  • IS#697 - Performance counter counts should be zero indexed
  • IS#696 - Distributed problem
  • IS#695 - Bad perf counter time printed
  • IS#693 - --help doesn't print component specific command line options
  • IS#692 - SLURM support broken
  • IS#691 - exception while executing any application linked with hwloc
  • IS#690 - thread_id_test and thread_launcher_test failing
  • IS#689 - Make the buildbots use hwloc
  • IS#687 - compilation error fix (hwloc_topology)
  • IS#686 - Linker Error for Applications
  • IS#684 - Pinning of service thread fails when number of worker threads equals the number of cores
  • IS#682 - Add performance counters exposing number of stolen threads
  • IS#681 - Add apply_continue for asynchronous chaining of actions
  • IS#679 - Remove obsolete async_callback API functions
  • IS#678 - Add new API for setting/triggering LCOs
  • IS#677 - Add async_continue for true continuation style actions
  • IS#676 - Buildbot for gcc 4.4 broken
  • IS#675 - Partial preprocessing broken
  • IS#674 - HPX segfaults when built with gcc 4.7
  • IS#673 - use_guard_pages has inconsistent preprocessor guards
  • IS#672 - External build breaks if library path has spaces
  • IS#671 - release tarballs are tarbombs
  • IS#670 - CMake won't find Boost headers in layout=versioned install
  • IS#669 - Links in docs to source files broken if not installed
  • IS#667 - Not reading ini file properly
  • IS#664 - Adapt new meanings of 'const' and 'mutable'
  • IS#661 - Implement BTL Parcel port
  • IS#655 - Make HPX work with the "decltype" result_of
  • IS#647 - documentation for specifying the number of high priority threads --hpx:high-priority-threads
  • IS#643 - Error parsing host file
  • IS#642 - HWLoc issue with TAU
  • IS#639 - Logging potentially suspends a running thread
  • IS#634 - Improve error reporting from parcel layer
  • IS#627 - Add tests for async and apply overloads that accept regular C++ functions
  • IS#626 - hpx/future.hpp header
  • IS#601 - Intel support
  • IS#557 - Remove action codes
  • IS#531 - AGAS request and response classes should use switch statements
  • IS#529 - Investigate the state of hwloc support
  • IS#526 - Make HPX aware of hyper-threading
  • IS#518 - Create facilities allowing to use plain arrays as action arguments
  • IS#473 - hwloc thread binding is broken on CPUs with hyperthreading
  • IS#383 - Change result type detection for hpx::util::bind to use result_of protocol
  • IS#341 - Consolidate route code
  • IS#219 - Only copy arguments into actions once
  • IS#177 - Implement distributed AGAS
  • IS#43 - Support for Darwin (Xcode + Clang)

We have had over 1000 commits since the last release and we have closed roughly 150 tickets (bugs, feature requests, etc.).

General Changes

This release continues along the lines of code and API consolidation and overall usability improvements. We dedicated much attention to performance and were able to significantly improve the threading and networking subsystems.

We successfully ported HPX to the Android platform. HPX applications can now not only run on mobile devices, but we also support heterogeneous applications running across architecture boundaries. At the Supercomputing Conference 2012 we demonstrated connecting Android tablets to simulations running on a Linux cluster. The Android tablet was used to query performance counters from the Linux simulation and to steer its parameters.

We successfully ported HPX to Mac OS X (using the Clang compiler). Thanks to Pyry Jahkola for contributing the corresponding patches. Please see the section How to Install HPX on Mac OS for more details.

We made a special effort to make HPX usable in highly concurrent use cases. Many of the HPX API functions which may take longer than 100 microseconds to execute can now be invoked asynchronously. We added uniform support for composing futures, which simplifies writing asynchronous code. HPX actions (function objects encapsulating possibly concurrent remote function invocations) are now well integrated with all other API facilities such as hpx::bind.

All of the API has been aligned as much as possible with established paradigms. HPX now mirrors many of the facilities as defined in the C++11 Standard, such as hpx::thread, hpx::function, hpx::future, etc.

A lot of work has been put into improving the documentation. Many of the API functions are documented now, concepts are explained in detail, and examples are better described than before. The new documentation index makes finding information easier.

This is the first release of HPX since the move to GitHub. This step has enabled wider participation from the community and further encourages us in our decision to release HPX as a true open source library (HPX is licensed under the very liberal Boost Software License).

Bug Fixes (Closed Tickets)

Here is a list of the important tickets we closed for this release. This is by far the longest list of newly implemented features and fixed issues for any HPX release so far.

  • IS#666 - Segfault on calling hpx::finalize twice
  • IS#665 - Adding declaration num_of_cores
  • IS#662 - pkgconfig is building wrong
  • IS#660 - Need uninterrupt function
  • IS#659 - Move our logging library into a different namespace
  • IS#658 - Dynamic performance counter types are broken
  • IS#657 - HPX v0.9.5 (RC1) hello_world example segfaulting
  • IS#656 - Define the affinity of parcel-pool, io-pool, and timer-pool threads
  • IS#654 - Integrate the Boost auto_index tool with documentation
  • IS#653 - Make HPX build on OS X + Clang + libc++
  • IS#651 - Add fine-grained control for thread pinning
  • IS#650 - Command line no error message when using -hpx:(anything)
  • IS#645 - Command line aliases don't work in @file
  • IS#644 - Terminated threads are not always properly cleaned up
  • IS#640 - future_data<T>::set_on_completed_ used without locks
  • IS#638 - hpx build with intel compilers fails on linux
  • IS#637 - --copy-dt-needed-entries breaks with gold
  • IS#635 - Boost V1.53 will add Boost.Lockfree and Boost.Atomic
  • IS#633 - Re-add examples to final 0.9.5 release
  • IS#632 - Example thread_aware_timer is broken
  • IS#631 - FFT application throws error in parcellayer
  • IS#630 - Event synchronization example is broken
  • IS#629 - Waiting on futures hangs
  • IS#628 - Add an HPX_ALWAYS_ASSERT macro
  • IS#625 - Port coroutines context switch benchmark
  • IS#621 - New INI section for stack sizes
  • IS#618 - pkg_config support does not work with a HPX debug build
  • IS#617 - hpx/external/logging/boost/logging/detail/cache_before_init.hpp:139:67: error: 'get_thread_id' was not declared in this scope
  • IS#616 - Change wait_xxx not to use locking
  • IS#615 - Revert visibility 'fix' (fb0b6b8245dad1127b0c25ebafd9386b3945cca9)
  • IS#614 - Fix Dataflow linker error
  • IS#613 - find_here should throw an exception on failure
  • IS#612 - Thread phase doesn't show up in debug mode
  • IS#611 - Make stack guard pages configurable at runtime (initialization time)
  • IS#610 - Co-Locate Components
  • IS#609 - future_overhead
  • IS#608 - --hpx:list-counter-infos problem
  • IS#607 - Update Boost.Context based backend for coroutines
  • IS#606 - 1d_wave_equation is not working
  • IS#605 - Any C++ function that has serializable arguments and a serializable return type should be remotable
  • IS#604 - Connecting localities isn't working anymore
  • IS#603 - Do not verify any ini entries read from a file
  • IS#602 - Rename argument_size to type_size/ added implementation to get parcel size
  • IS#599 - Enable locality specific command line options
  • IS#598 - Need an API that accesses the performance counter reporting the system uptime
  • IS#597 - compiling on ranger
  • IS#595 - I need a place to store data in a thread self pointer
  • IS#594 - 32/64 interoperability
  • IS#593 - Warn if logging is disabled at compile time but requested at runtime
  • IS#592 - Add optional argument value to --hpx:list-counters and --hpx:list-counter-infos
  • IS#591 - Allow for wildcards in performance counter names specified with --hpx:print-counter
  • IS#590 - Local promise semantic differences
  • IS#589 - Create API to query performance counter names
  • IS#587 - Add get_num_localities and get_num_threads to AGAS API
  • IS#586 - Adjust local AGAS cache size based on number of localities
  • IS#585 - Error while using counters in HPX
  • IS#584 - counting argument size of actions, initial pass.
  • IS#581 - Remove RemoteResult template parameter for future<>
  • IS#580 - Add possibility to hook into actions
  • IS#578 - Use angle brackets in HPX error dumps
  • IS#576 - Exception incorrectly thrown when --help is used
  • IS#575 - HPX(bad_component_type) with gcc 4.7.2 and boost 1.51
  • IS#574 - --hpx:connect command line parameter not working correctly
  • IS#571 - hpx::wait() (callback version) should pass the future to the callback function
  • IS#570 - hpx::wait should operate on boost::arrays and std::lists
  • IS#569 - Add a logging sink for Android
  • IS#568 - 2-argument version of HPX_DEFINE_COMPONENT_ACTION
  • IS#567 - Connecting to a running HPX application works only once
  • IS#565 - HPX doesn't shutdown properly
  • IS#564 - Partial preprocessing of new component creation interface
  • IS#563 - Add hpx::start/hpx::stop to avoid blocking main thread
  • IS#562 - All command line arguments swallowed by hpx
  • IS#561 - Boost.Tuple is not move aware
  • IS#558 - boost::shared_ptr<> style semantics/syntax for client classes
  • IS#556 - Creation of partially preprocessed headers should be enabled for Boost newer than V1.50
  • IS#555 - BOOST_FORCEINLINE does not name a type
  • IS#554 - Possible race condition in thread get_id()
  • IS#552 - Move enable client_base
  • IS#550 - Add stack size category 'huge'
  • IS#549 - ShenEOS run seg-faults on single or distributed runs
  • IS#545 - AUTOGLOB broken for add_hpx_component
  • IS#542 - FindHPX_HDF5 still searches multiple times
  • IS#541 - Quotes around application name in hpx::init
  • IS#539 - Race conditition occuring with new lightweight threads
  • IS#535 - hpx_run_tests.py exits with no error code when tests are missing
  • IS#530 - Thread description(<unknown>) in logs
  • IS#523 - Make thread objects more lightweight
  • IS#521 - hpx::error_code is not usable for lightweight error handling
  • IS#520 - Add full user environment to HPX logs
  • IS#519 - Build succeeds, running fails
  • IS#517 - Add a guard page to linux coroutine stacks
  • IS#516 - hpx::thread::detach suspends while holding locks, leads to hang in debug
  • IS#514 - Preprocessed headers for <hpx/apply.hpp> don't compile
  • IS#513 - Buildbot configuration problem
  • IS#512 - Implement action based stack size customization
  • IS#511 - Move action priority into a separate type trait
  • IS#510 - trunk broken
  • IS#507 - no matching function for call to boost::scoped_ptr<hpx::threads::topology>::scoped_ptr(hpx::threads::linux_topology*)
  • IS#505 - undefined_symbol regression test currently failing
  • IS#502 - Adding OpenCL and OCLM support to HPX for Windows and Linux
  • IS#501 - find_package(HPX) sets cmake output variables
  • IS#500 - wait_any/wait_all are badly named
  • IS#499 - Add support for disabling pbs support in pbs runs
  • IS#498 - Error during no-cache runs
  • IS#496 - Add partial preprocessing support to cmake
  • IS#495 - Support HPX modules exporting startup/shutdown functions only
  • IS#494 - Allow modules to specify when to run startup/shutdown functions
  • IS#493 - Avoid constructing a string in make_success_code
  • IS#492 - Performance counter creation is no longer synchronized at startup
  • IS#491 - Performance counter creation is no longer synchronized at startup
  • IS#490 - Sheneos on_completed_bulk seg fault in distributed
  • IS#489 - compiling issue with g++44
  • IS#488 - Adding OpenCL and OCLM support to HPX for the MSVC platform
  • IS#487 - FindHPX.cmake problems
  • IS#485 - Change distributing_factory and binpacking_factory to use bulk creation
  • IS#484 - Change HPX_DONT_USE_PREPROCESSED_FILES to HPX_USE_PREPROCESSED_FILES
  • IS#483 - Memory counter for Windows
  • IS#479 - strange errors appear when requesting performance counters on multiple nodes
  • IS#477 - Create (global) timer for multi-threaded measurements
  • IS#472 - Add partial preprocessing using Wave
  • IS#471 - Segfault stack traces don't show up in release
  • IS#468 - External projects need to link with internal components
  • IS#462 - Startup/shutdown functions are called more than once
  • IS#458 - Consolidate hpx::util::high_resolution_timer and hpx::util::high_resolution_clock
  • IS#457 - index out of bounds in allgather_and_gate on 4 cores or more
  • IS#448 - Make HPX compile with clang
  • IS#447 - 'make tests' should execute tests on local installation
  • IS#446 - Remove SVN-related code from the codebase
  • IS#444 - race condition in smp
  • IS#441 - Patched Boost.Serialization headers should only be installed if needed
  • IS#439 - Components using HPX_REGISTER_STARTUP_MODULE fail to compile with MSVC
  • IS#436 - Verify that no locks are being held while threads are suspended
  • IS#435 - Installing HPX should not clobber existing Boost installation
  • IS#434 - Logging external component failed (Boost 1.50)
  • IS#433 - Runtime crash when building all examples
  • IS#432 - Dataflow hangs on 512 cores/64 nodes
  • IS#430 - Problem with distributing factory
  • IS#424 - File paths referring to XSL-files need to be properly escaped
  • IS#417 - Make dataflow LCOs work out of the box by using partial preprocessing
  • IS#413 - hpx_svnversion.py fails on Windows
  • IS#412 - Make hpx::error_code equivalent to hpx::exception
  • IS#398 - HPX clobbers out-of-tree application specific CMake variables (specifically CMAKE_BUILD_TYPE)
  • IS#394 - Remove code generating random port numbers for network
  • IS#378 - ShenEOS scaling issues
  • IS#354 - Create a coroutines wrapper for Boost.Context
  • IS#349 - Commandline option --localities=N/-lN should be necessary only on AGAS locality
  • IS#334 - Add auto_index support to cmake based documentation toolchain
  • IS#318 - Network benchmarks
  • IS#317 - Implement network performance counters
  • IS#310 - Duplicate logging entries
  • IS#230 - Add compile time option to disable thread debugging info
  • IS#171 - Add an INI option to turn off deadlock detection independently of logging
  • IS#170 - OSHL internal counters are incorrect
  • IS#103 - Better diagnostics for multiple component/action registerations under the same name
  • IS#48 - Support for Darwin (Xcode + Clang)
  • IS#21 - Build fails with GCC 4.6

We have had roughly 800 commits since the last release and we have closed approximately 80 tickets (bugs, feature requests, etc.).

General Changes
  • Significant improvements made to the usability of HPX in large-scale, distributed environments.
  • Renamed hpx::lcos::packaged_task<> to hpx::lcos::packaged_action<> to reflect the semantic differences from a packaged_task as defined by the C++11 Standard.
  • HPX now exposes hpx::thread which is compliant to the C++11 std::thread type except that it (purely locally) represents an HPX thread. This new type does not expose any of the remote capabilities of the underlying HPX-thread implementation.
  • The type hpx::lcos::future<> is now compliant to the C++11 std::future<> type. This type can be used to synchronize both local and remote operations. In both cases the control flow will 'return' to the future in order to trigger any continuation.
  • The types hpx::lcos::local::promise<> and hpx::lcos::local::packaged_task<> are now compliant to the C++11 std::promise<> and std::packaged_task<> types. These can be used to create a future representing local work only. Use the types hpx::lcos::promise<> and hpx::lcos::packaged_action<> to wrap any (possibly remote) action into a future.
  • hpx::thread and hpx::lcos::future<> are now cancelable.
  • Added support for sequential and logical composition of hpx::lcos::future<>'s. The member function hpx::lcos::future::when() permits futures to be sequentially composed. The helper functions hpx::wait_all, hpx::wait_any, and hpx::wait_n can be used to wait for more than one future at a time.
  • HPX now exposes hpx::apply() and hpx::async() as the preferred way of creating (or invoking) any deferred work. These functions are usable with various types of functions, function objects, and actions and provide a uniform way to spawn deferred tasks.
  • HPX now utilizes hpx::util::bind to (partially) bind local functions and function objects, and also actions. Remote bound actions can have placeholders as well.
  • HPX continuations are now fully polymorphic. The class hpx::actions::forwarding_continuation is an example of how users can write their own types of continuations. It can be used to execute any function as a continuation of a particular action.
  • Reworked the action invocation API to be fully conformant to normal functions. Actions can now be invoked using hpx::apply(), hpx::async(), or using the operator() implemented on actions. Actions themselves can now be cheaply instantiated as they do not have any members anymore.
  • Reworked the lazy action invocation API. Actions can now be directly bound using hpx::util::bind() by passing an action instance as the first argument.
  • A minimal HPX program now looks like this:

    #include <hpx/hpx_init.hpp>
    
    int hpx_main()
    {
        return hpx::finalize();
    }
    
    int main()
    {
        return hpx::init();
    }
    

    This removes the immediate dependency on the Boost.Program Options library.

[Note]Note

This minimal version of an HPX program does not support any of the default command line arguments (such as --help, or command line options related to PBS). It is suggested to always pass argc and argv to HPX as shown in the example below.

  • In order to support those, but still not to depend on Boost.Program Options, the minimal program can be written as:

    #include <hpx/hpx_init.hpp>
    
    // The arguments for hpx_main can be left off, which is very similar to
    // the behavior of `main()` as defined by C++.
    int hpx_main(int argc, char* argv[])
    {
        return hpx::finalize();
    }
    
    int main(int argc, char* argv[])
    {
        return hpx::init(argc, argv);
    }
    
  • Added performance counters exposing the number of component instances which are alive on a given locality.
  • Added performance counters exposing the number of messages sent and received, the number of parcels sent and received, the number of bytes sent and received, the overall time required to send and receive data, and the overall time required to serialize and deserialize the data.
  • Added a new component: hpx::components::binpacking_factory which is equivalent to the existing hpx::components::distributing_factory component, except that it equalizes the overall population of the components to create. It exposes two factory methods, one based on the number of existing instances of the component type to create, and one based on an arbitrary performance counter which will be queried for all relevant localities.
  • Added API functions allowing access to the elements of the diagnostic information embedded in a given exception: hpx::get_locality_id(), hpx::get_host_name(), hpx::get_process_id(), hpx::get_function_name(), hpx::get_file_name(), hpx::get_line_number(), hpx::get_os_thread(), hpx::get_thread_id(), and hpx::get_thread_description().
Bug Fixes (Closed Tickets)

Here is a list of the important tickets we closed for this release:

  • IS#71 - GIDs that are not serialized via handle_gid<> should raise an exception
  • IS#105 - Allow for hpx::util::functions to be registered in the AGAS symbolic namespace
  • IS#107 - Nasty threadmanger race condition (reproducible in sheneos_test)
  • IS#108 - Add millisecond resolution to HPX logs on Linux
  • IS#110 - Shutdown hang in distributed with release build
  • IS#116 - Don't use TSS for the applier and runtime pointers
  • IS#162 - Move local synchronous execution shortcut from hpx::function to the applier
  • IS#172 - Cache sources in CMake and check if they change manually
  • IS#178 - Add an INI option to turn off ranged-based AGAS caching
  • IS#187 - Support for disabling performance counter deployment
  • IS#202 - Support for sending performance counter data to a specific file
  • IS#218 - boost.coroutines allows different stack sizes, but stack pool is unaware of this
  • IS#231 - Implement movable boost::bind
  • IS#232 - Implement movable boost::function
  • IS#236 - Allow binding hpx::util::function to actions
  • IS#239 - Replace hpx::function with hpx::util::function
  • IS#240 - Can't specify RemoteResult with lcos::async
  • IS#242 - REGISTER_TEMPLATE support for plain actions
  • IS#243 - handle_gid<> support for hpx::util::function
  • IS#245 - *_c_cache code throws an exception if the queried GID is not in the local cache
  • IS#246 - Undefined references in dataflow/adaptive1d example
  • IS#252 - Problems configuring sheneos with CMake
  • IS#254 - Lifetime of components doesn't end when client goes out of scope
  • IS#259 - CMake does not detect that MSVC10 has lambdas
  • IS#260 - io_service_pool segfault
  • IS#261 - Late parcel executed outside of pxthread
  • IS#263 - Cannot select allocator with CMake
  • IS#264 - Fix allocator select
  • IS#267 - Runtime error for hello_world
  • IS#269 - pthread_affinity_np test fails to compile
  • IS#270 - Compiler noise due to -Wcast-qual
  • IS#275 - Problem with configuration tests/include paths on Gentoo
  • IS#325 - Sheneos is 200-400 times slower than the fortran equivalent
  • IS#331 - hpx::init() and hpx_main() should not depend on program_options
  • IS#333 - Add doxygen support to CMake for doc toolchain
  • IS#340 - Performance counters for parcels
  • IS#346 - Component loading error when running hello_world in distributed on MSVC2010
  • IS#362 - Missing initializer error
  • IS#363 - Parcel port serialization error
  • IS#366 - Parcel buffering leads to types incompatible exception
  • IS#368 - Scalable alternative to rand() needed for HPX
  • IS#369 - IB over IP is substantially slower than just using standard TCP/IP
  • IS#374 - hpx::lcos::wait should work with dataflows and arbitrary classes meeting the future interface
  • IS#375 - Conflicting/ambiguous overloads of hpx::lcos::wait
  • IS#376 - Find_HPX.cmake should set CMake variable HPX_FOUND for out of tree builds
  • IS#377 - ShenEOS interpolate bulk and interpolate_one_bulk are broken
  • IS#379 - Add support for distributed runs under SLURM
  • IS#382 - _Unwind_Word not declared in boost.backtrace
  • IS#387 - Doxygen should look only at list of specified files
  • IS#388 - Running make install on an out-of-tree application is broken
  • IS#391 - Out-of-tree application segfaults when running in qsub
  • IS#392 - Remove HPX_NO_INSTALL option from cmake build system
  • IS#396 - Pragma related warnings when compiling with older gcc versions
  • IS#399 - Out of tree component build problems
  • IS#400 - Out of source builds on Windows: linker should not receive compiler flags
  • IS#401 - Out of source builds on Windows: components need to be linked with hpx_serialization
  • IS#404 - gfortran fails to link automatically when fortran files are present
  • IS#405 - Inability to specify linking order for external libraries
  • IS#406 - Adapt action limits such that dataflow applications work without additional defines
  • IS#415 - locality_results is not a member of hpx::components::server
  • IS#425 - Breaking changes to traits::*result wrt std::vector<id_type>
  • IS#426 - AUTOGLOB needs to be updated to support fortran

This is a point release including important bug fixes for V0.8.0.

General Changes
  • HPX does not need to be installed anymore to be functional.
Bug Fixes (Closed Tickets)

Here is a list of the important tickets we closed for this point release:

  • IS#295 - Don't require install path to be known at compile time.
  • IS#371 - Add hpx iostreams to standard build.
  • IS#384 - Fix compilation with GCC 4.7.
  • IS#390 - Remove keep_factory_alive startup call from ShenEOS; add shutdown call to H5close.
  • IS#393 - Thread affinity control is broken.
Bug Fixes (Commits)

Here is a list of the important commits included in this point release:

  • r7642 - External: Fix backtrace memory violation.
  • r7775 - Components: Fix symbol visibility bug with component startup providers. This prevents one component's startup providers from overriding another component's.
  • r7778 - Components: Fix startup/shutdown provider shadowing issues.

We have had roughly 1000 commits since the last release and we have closed approximately 70 tickets (bugs, feature requests, etc.).

General Changes
  • Improved PBS support, allowing for arbitrary naming schemes of node-hostnames.
  • Finished verification of the reference counting framework.
  • Implemented decrement merging logic to optimize the distributed reference counting system.
  • Restructured the LCO framework. Renamed hpx::lcos::eager_future<> and hpx::lcos::lazy_future<> into hpx::lcos::packaged_task<> and hpx::lcos::deferred_packaged_task<>. Split hpx::lcos::promise<> into hpx::lcos::packaged_task<> and hpx::lcos::future<>. Added 'local' futures (in namespace hpx::lcos::local).
  • Improved the general performance of local and remote action invocations. This (under certain circumstances) drastically reduces the number of copies created for each of the parameters and return values.
  • Reworked the performance counter framework. Performance counters are now created only when needed, which reduces the overall resource requirements. The new framework allows for much more flexible creation and management of performance counters. The new sine example application demonstrates some of the capabilities of the new infrastructure.
  • Added a buildbot-based continuous build system which gives instant, automated feedback on each commit to SVN.
  • Added more automated tests to verify proper functioning of HPX.
  • Started to create documentation for HPX and its API.
  • Added documentation toolchain to the build system.
  • Added dataflow LCO.
  • Changed default HPX command line options to have hpx: prefix. For instance, the former option --threads is now --hpx:threads. This has been done to make ambiguities with possible application specific command line options as unlikely as possible. See the section HPX Command Line Options for a full list of available options.
  • Added the possibility to define command line aliases. The former short (one-letter) command line options have been predefined as aliases for backwards compatibility. See the section HPX Command Line Options for a detailed description of command line option aliasing.
  • Network connections are now cached based on the connected host. The number of simultaneous connections to a particular host is now limited. Parcels are buffered and bundled if all connections are in use.
  • Added more refined thread affinity control. This is based on the external library Portable Hardware Locality (HWLOC).
  • Improved support for Windows builds with CMake.
  • Added support for components to register their own command line options.
  • Added the possibility to register custom startup/shutdown functions for any component. These functions are guaranteed to be executed by an HPX thread.
  • Added two new experimental thread schedulers: hierarchy_scheduler and periodic_priority_scheduler. These can be activated by using the command line options --hpx:queueing=hierarchy or --hpx:queueing=periodic.
Example Applications
  • Graph500 performance benchmark (thanks to Matthew Anderson for contributing this application).
  • GTC (Gyrokinetic Toroidal Code): a skeleton for particle in cell type codes.
  • Random Memory Access: an example demonstrating random memory accesses in a large array
  • ShenEOS example, demonstrating partitioning of large read-only data structures and exposing an interpolation API.
  • Sine performance counter demo.
  • Accumulator examples demonstrating how to write and use HPX components.
  • Quickstart examples (like hello_world, fibonacci, quicksort, factorial, etc.) introducing some of the basic concepts in HPX.
  • Load balancing and work stealing demos.
API Changes
  • Moved all local LCOs into a separate namespace hpx::lcos::local (for instance, hpx::lcos::local_mutex is now hpx::lcos::local::mutex).
  • Replaced hpx::actions::function with hpx::util::function. Cleaned up related code.
  • Removed hpx::traits::handle_gid and moved handling of global reference counts into the corresponding serialization code.
  • Changed terminology: prefix is now called locality_id, renamed the corresponding API functions (such as hpx::get_prefix, which is now called hpx::get_locality_id).
  • Added hpx::find_remote_localities() and hpx::get_num_localities().
  • Changed performance counter naming scheme to make it more bash friendly. The new performance counter naming scheme is now
/object{parentname#parentindex/instance#index}/counter#parameters
  • Added hpx::get_worker_thread_num replacing hpx::threadmanager_base::get_thread_num.
  • Renamed hpx::get_num_os_threads to hpx::get_os_threads_count.
  • Added hpx::threads::get_thread_count.
  • Restructured the Futures sub-system, renaming types in accordance with the terminology used by the C++11 ISO standard.
Bug Fixes (Closed Tickets)

Here is a list of the important tickets we closed for this release:

  • IS#31 - Specialize handle_gid<> for examples and tests
  • IS#72 - Fix AGAS reference counting
  • IS#104 - heartbeat throws an exception when decrefing the performance counter it's watching
  • IS#111 - throttle causes an exception on the target application
  • IS#142 - One failed component loading causes an unrelated component to fail
  • IS#165 - Remote exception propagation bug in AGAS reference counting test
  • IS#186 - Test credit exhaustion/splitting (e.g. prepare_gid and symbol NS)
  • IS#188 - Implement remaining AGAS reference counting test cases
  • IS#258 - No type checking of GIDs in stubs classes
  • IS#271 - Seg fault/shared pointer assertion in distributed code
  • IS#281 - CMake options need descriptive text
  • IS#283 - AGAS caching broken (gva_cache needs to be rewritten with ICL)
  • IS#285 - HPX_INSTALL root directory not the same as CMAKE_INSTALL_PREFIX
  • IS#286 - New segfault in dataflow applications
  • IS#289 - Exceptions should only be logged if not handled
  • IS#290 - c++11 tests failure
  • IS#293 - Build target for component libraries
  • IS#296 - Compilation error with Boost V1.49rc1
  • IS#298 - Illegal instructions on termination
  • IS#299 - gravity aborts with multiple threads
  • IS#301 - Build error with Boost trunk
  • IS#303 - Logging assertion failure in distributed runs
  • IS#304 - Exception 'what' strings are lost when exceptions from decode_parcel are reported
  • IS#306 - Performance counter user interface issues
  • IS#307 - Logging exception in distributed runs
  • IS#308 - Logging deadlocks in distributed
  • IS#309 - Reference counting test failures and exceptions
  • IS#311 - Merge AGAS remote_interface with the runtime_support object
  • IS#314 - Object tracking for id_types
  • IS#315 - Remove handle_gid and handle credit splitting in id_type serialization
  • IS#320 - applier::get_locality_id() should return an error value (or throw an exception)
  • IS#321 - Optimization for id_types which are never split should be restored
  • IS#322 - Command line processing ignored with Boost 1.47.0
  • IS#323 - Credit exhaustion causes object to stay alive
  • IS#324 - Duplicate exception messages
  • IS#326 - Integrate Quickbook with CMake
  • IS#329 - --help and --version should still work
  • IS#330 - Create pkg-config files
  • IS#337 - Improve usability of performance counter timestamps
  • IS#338 - Non-std exceptions deriving from std::exceptions in tfunc may be sliced
  • IS#339 - Decrease the number of send_pending_parcels threads
  • IS#343 - Dynamically setting the stack size doesn't work
  • IS#351 - 'make install' does not update documents
  • IS#353 - Disable FIXMEs in the docs by default; add a doc developer CMake option to enable FIXMEs
  • IS#355 - 'make' doesn't do anything after correct configuration
  • IS#356 - Don't use hpx::util::static_ in topology code
  • IS#359 - Infinite recursion in hpx::tuple serialization
  • IS#361 - Add compile time option to disable logging completely
  • IS#364 - Installation seriously broken in r7443

We have had roughly 1000 commits since the last release and we have closed approximately 120 tickets (bugs, feature requests, etc.).

General Changes
  • Completely removed code related to deprecated AGAS V1, started to work on AGAS V2.1.
  • Started to clean up and streamline the exposed APIs (see 'API changes' below for more details).
  • Revamped and unified performance counter framework, added a lot of new performance counter instances for monitoring of a diverse set of internal HPX parameters (queue lengths, access statistics, etc.).
  • Improved general error handling and logging support.
  • Fixed several race conditions, improved overall stability, decreased memory footprint, improved overall performance (major optimizations include native TLS support and ranged-based AGAS caching).
  • Added support for running HPX applications with PBS.
  • Many updates to the build system, added support for gcc 4.5.x and 4.6.x, added C++11 support.
  • Many updates to default command line options.
  • Added many tests, set up buildbot for continuous integration testing.
  • Better shutdown handling of distributed applications.
Example Applications
  • quickstart/factorial and quickstart/fibonacci, future-recursive parallel algorithms.
  • quickstart/hello_world, distributed hello world example.
  • quickstart/rma, simple remote memory access example
  • quickstart/quicksort, parallel quicksort implementation.
  • gtc, gyrokinetic toroidal code.
  • bfs, breadth-first-search, example code for a graph application.
  • sheneos, partitioning of large data sets.
  • accumulator, simple component example.
  • balancing/os_thread_num, balancing/px_thread_phase, examples demonstrating load balancing and work stealing.
API Changes
  • Added hpx::find_all_localities.
  • Added hpx::terminate for non-graceful termination of applications.
  • Added hpx::lcos::async functions for simpler asynchronous programming.
  • Added new AGAS interface for handling of symbolic namespace (hpx::agas::*).
  • Renamed hpx::components::wait to hpx::lcos::wait.
  • Renamed hpx::lcos::future_value to hpx::lcos::promise.
  • Renamed hpx::lcos::recursive_mutex to hpx::lcos::local_recursive_mutex, hpx::lcos::mutex to hpx::lcos::local_mutex
  • Removed support for Boost versions older than V1.38, recommended Boost version is now V1.47 and newer.
  • Removed hpx::process (this will be replaced by a real process implementation in the future).
  • Removed non-functional LCO code (hpx::lcos::dataflow, hpx::lcos::thunk, hpx::lcos::dataflow_variable).
  • Removed deprecated hpx::naming::full_address.
Bug Fixes (Closed Tickets)

Here is a list of the important tickets we closed for this release:

  • IS#28 - Integrate Windows/Linux CMake code for HPX core
  • IS#32 - hpx::cout() should be hpx::cout
  • IS#33 - AGAS V2 legacy client does not properly handle error_code
  • IS#60 - AGAS: allow for registerid to optionally take ownership of the gid
  • IS#62 - adaptive1d compilation failure in Fusion
  • IS#64 - Parcel subsystem doesn't resolve domain names
  • IS#83 - No error handling if no console is available
  • IS#84 - No error handling if a hosted locality is treated as the bootstrap server
  • IS#90 - Add general commandline option -N
  • IS#91 - Add possibility to read command line arguments from file
  • IS#92 - Always log exceptions/errors to the log file
  • IS#93 - Log the command line/program name
  • IS#95 - Support for distributed launches
  • IS#97 - Attempt to create a bad component type in AMR examples
  • IS#100 - factorial and factorial_get examples trigger AGAS component type assertions
  • IS#101 - Segfault when hpx::process::here() is called in fibonacci2
  • IS#102 - unknown_component_address in int_object_semaphore_client
  • IS#114 - marduk raises assertion with default parameters
  • IS#115 - Logging messages for SMP runs (on the console) shouldn't be buffered
  • IS#119 - marduk linking strategy breaks other applications
  • IS#121 - pbsdsh problem
  • IS#123 - marduk, dataflow and adaptive1d fail to build
  • IS#124 - Lower default preprocessing arity
  • IS#125 - Move hpx::detail::diagnostic_information out of the detail namespace
  • IS#126 - Test definitions for AGAS reference counting
  • IS#128 - Add averaging performance counter
  • IS#129 - Error with endian.hpp while building adaptive1d
  • IS#130 - Bad initialization of performance counters
  • IS#131 - Add global startup/shutdown functions to component modules
  • IS#132 - Avoid using auto_ptr
  • IS#133 - On Windows hpx.dll doesn't get installed
  • IS#134 - HPX_LIBRARY does not reflect real library name (on Windows)
  • IS#135 - Add detection of unique_ptr to build system
  • IS#137 - Add command line option allowing to repeatedly evaluate performance counters
  • IS#139 - Logging is broken
  • IS#140 - CMake problem on windows
  • IS#141 - Move all non-component libraries into $PREFIX/lib/hpx
  • IS#143 - adaptive1d throws an exception with the default command line options
  • IS#146 - Early exception handling is broken
  • IS#147 - Sheneos doesn't link on Linux
  • IS#149 - sheneos_test hangs
  • IS#154 - Compilation fails for r5661
  • IS#155 - Sine performance counters example chokes on chrono headers
  • IS#156 - Add build type to --version
  • IS#157 - Extend AGAS caching to store gid ranges
  • IS#158 - r5691 doesn't compile
  • IS#160 - Re-add AGAS function for resolving a locality to its prefix
  • IS#168 - Managed components should be able to access their own GID
  • IS#169 - Rewrite AGAS future pool
  • IS#179 - Complete switch to request class for AGAS server interface
  • IS#182 - Sine performance counter is loaded by other examples
  • IS#185 - Write tests for symbol namespace reference counting
  • IS#191 - Assignment of read-only variable in point_geometry
  • IS#200 - Seg faults when querying performance counters
  • IS#204 - --ifnames and suffix stripping needs to be more generic
  • IS#205 - --list-* and --print-counter-* options do not work together and produce no warning
  • IS#207 - Implement decrement entry merging
  • IS#208 - Replace the spinlocks in AGAS with hpx::lcos::local_mutexes
  • IS#210 - Add an --ifprefix option
  • IS#214 - Performance test for PX-thread creation
  • IS#216 - VS2010 compilation
  • IS#222 - r6045 context_linux_x86.hpp
  • IS#223 - fibonacci hangs when changing the state of an active thread
  • IS#225 - Active threads end up in the FEB wait queue
  • IS#226 - VS Build Error for Accumulator Client
  • IS#228 - Move all traits into namespace hpx::traits
  • IS#229 - Invalid initialization of reference in thread_init_data
  • IS#235 - Invalid GID in iostreams
  • IS#238 - Demangle type names for the default implementation of get_action_name
  • IS#241 - C++11 support breaks GCC 4.5
  • IS#247 - Reference to temporary with GCC 4.4
  • IS#248 - Seg fault at shutdown with GCC 4.4
  • IS#253 - Default component action registration kills compiler
  • IS#272 - G++ unrecognized command line option
  • IS#273 - quicksort example doesn't compile
  • IS#277 - Invalid CMake logic for Windows
Welcome

Welcome to the HPX runtime system libraries! By the time you've completed this tutorial, you'll be at least somewhat comfortable with HPX and how to go about using it.

What's Here

This document is designed to be an extremely gentle introduction, so we included a fair amount of material that may already be very familiar to you. To keep things simple, we also left out some information intermediate and advanced users will probably want. At the end of this document, we'll refer you to resources that can help you pursue these topics further.

Most HPX applications are executed on parallel computers. These platforms typically provide integrated job management services that facilitate the allocation of computing resources for each parallel program. HPX includes out of the box support for one of the most common job management systems, the Portable Batch System (PBS).

All PBS jobs require a script to specify the resource requirements and other parameters associated with a parallel job. The PBS script is basically a shell script with PBS directives placed within commented sections at the beginning of the file. The remaining (not commented-out) portions of the file execute just like any other regular shell script. While the description of all available PBS options is outside the scope of this tutorial (the interested reader may refer to in-depth documentation for more information), below is a minimal example to illustrate the approach. As a test application we will use the multithreaded hello_world program, explained in the section Hello World Example.

#!/bin/bash
#
#PBS -l nodes=2:ppn=4

APP_PATH=~/packages/hpx/bin/hello_world
APP_OPTIONS=

pbsdsh -u $APP_PATH $APP_OPTIONS --hpx:nodes=`cat $PBS_NODEFILE`
[Caution]Caution

If the first application specific argument (inside $APP_OPTIONS) is a non-option (i.e. does not start with a '-' or a '--'), then it has to be placed before the option --hpx:nodes, which in this case should be the last option on the command line.

Alternatively, use the option --hpx:endnodes to explicitly mark the end of the list of node names:

pbsdsh -u $APP_PATH --hpx:nodes=`cat $PBS_NODEFILE` --hpx:endnodes $APP_OPTIONS

The #PBS -l nodes=2:ppn=4 directive will cause two compute nodes to be allocated for the application, as specified in the option nodes. Each of the nodes will dedicate four cores to the program, as per the option ppn, short for "processors per node" (PBS does not distinguish between processors and cores). Note that requesting more cores per node than physically available is pointless and may prevent PBS from accepting the script.

On newer PBS versions the PBS command syntax might be different. For instance, the PBS script above would look like:

#!/bin/bash
#
#PBS -l select=2:ncpus=4

APP_PATH=~/packages/hpx/bin/hello_world
APP_OPTIONS=

pbsdsh -u $APP_PATH $APP_OPTIONS --hpx:nodes=`cat $PBS_NODEFILE`

APP_PATH and APP_OPTIONS are shell variables that respectively specify the correct path to the executable (hello_world in this case) and the command line options. Since the hello_world application doesn't need any command line options, APP_OPTIONS has been left empty. Unlike in other execution environments, there is no need to use the --hpx:threads option to indicate the required number of OS threads per node; the HPX library will derive this parameter automatically from PBS.

Finally, pbsdsh is a PBS command that starts tasks on the resources allocated to the current job. It is recommended to leave this line as shown and modify only the PBS options and shell variables as needed for a specific application.

[Important]Important

A script invoked by pbsdsh starts in a very basic environment: the user's $HOME directory is defined and is the current directory, the LANG variable is set to C, and the PATH is set to the basic /usr/local/bin:/usr/bin:/bin as defined in a system-wide file pbs_environment. Nothing that would normally be set up by a system shell profile or user shell profile is defined, unlike the environment for the main job script.

Another choice is for the pbsdsh command in your main job script to invoke your program via a shell, like sh or bash, so that it gives an initialized environment for each instance. We create a small script runme.sh which is used to invoke the program:

#!/bin/bash
# Small script which invokes the program based on what was passed on its
# command line.
#
# This script is executed by the bash shell which will initialize all
# environment variables as usual. Note the quotes around $@, which
# preserve the original argument boundaries.
"$@"

Now, we invoke this script using the pbsdsh tool:

#!/bin/bash
#
#PBS -l nodes=2:ppn=4

APP_PATH=~/packages/hpx/bin/hello_world
APP_OPTIONS=

pbsdsh -u runme.sh $APP_PATH $APP_OPTIONS --hpx:nodes=`cat $PBS_NODEFILE`

All that remains now is submitting the job to the queuing system. Assuming that the contents of the PBS script were saved in file pbs_hello_world.sh in the current directory, this is accomplished by typing:

qsub ./pbs_hello_world.sh

If the job is accepted, qsub will print out the assigned job ID, which may look like:

42.supercomputer.some.university.edu

To check the status of your job, issue the following command:

qstat 42.supercomputer.some.university.edu

and look for a single-letter job status symbol. The common cases include:

  • Q - signifies that the job is queued and awaiting its turn to be executed.
  • R - indicates that the job is currently running.
  • C - means that the job has completed.

The example qstat output below shows a job waiting for execution resources to become available:

Job id                    Name             User            Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
42.supercomputer          ...ello_world.sh joe_user               0 Q batch

After the job completes, PBS will place two files, pbs_hello_world.sh.o42 and pbs_hello_world.sh.e42, in the directory where the job was submitted. The first contains the standard output and the second contains the standard error from all the nodes on which the application executed. In our example, the error output file should be empty and standard output file should contain something similar to:

hello world from OS-thread 3 on locality 0
hello world from OS-thread 2 on locality 0
hello world from OS-thread 1 on locality 1
hello world from OS-thread 0 on locality 0
hello world from OS-thread 3 on locality 1
hello world from OS-thread 2 on locality 1
hello world from OS-thread 1 on locality 0
hello world from OS-thread 0 on locality 1

Congratulations! You have just run your first distributed HPX application!

Just like PBS (described in section Using PBS), SLURM is a job management system which is widely used on large supercomputing systems. Any HPX application can easily be run using SLURM. This section describes how this can be done.

The easiest way to run an HPX application using SLURM is to utilize the command line tool srun which interacts with the SLURM batch scheduling system.

srun -p <partition> -N <number-of-nodes> hpx-application <application-arguments>

Here, <partition> is one of the node partitions existing on the target machine (consult the machine's documentation for a list of existing partitions) and <number-of-nodes> is the number of compute nodes you want to use. By default, the HPX application is started with one locality per node and uses all available cores on a node. You can change the number of localities started per node (for example to account for NUMA effects) by specifying the -n option of srun. The number of cores per locality can be set by -c. The <application-arguments> are any application specific arguments which need to be passed on to the application.

[Note]Note

There is no need to use any of the HPX command line options related to the number of localities, number of threads, or related to networking ports. All of this information is automatically extracted from the SLURM environment by the HPX startup code.

[Important]Important

The srun documentation explicitly states: "If -c is specified without -n, as many tasks will be allocated per node as possible while satisfying the -c restriction. For instance on a cluster with 8 CPUs per node, a job request for 4 nodes and 3 CPUs per task may be allocated 3 or 6 CPUs per node (1 or 2 tasks per node) depending upon resource consumption by other jobs." For this reason, we suggest to always specify -n <number-of-instances>, even if <number-of-instances> is equal to one (1).
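
Putting the recommendation above into practice, a hypothetical invocation might look like the following sketch. The partition name, node count, and application name are placeholders; note that -n is given explicitly even though it matches -N:

```shell
# Hypothetical example: 4 nodes, one HPX locality per node (-n 4),
# 8 cores per locality (-c 8). Partition and binary are placeholders.
srun -p workq -N 4 -n 4 -c 8 my_hpx_app --app-arg=value
```

Specifying -n explicitly pins the number of localities, so the allocation cannot silently change if other jobs consume resources on the requested nodes.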

Interactive Shells

To get an interactive development shell on one of the nodes you can issue the following command:

srun -p <node-type> -N <number-of-nodes> --pty /bin/bash -l

After the shell has been opened, you can run your HPX application. By default, it uses all available cores. Note that if you requested one node, you don't need to do srun again. However, if you requested more than one node and want to run your distributed application, you can use srun again to start up the distributed HPX application. It will use the resources that have been requested for the interactive shell.

Scheduling Batch Jobs

The above-mentioned method of running HPX applications is fine for development purposes. The disadvantage of srun is that it only returns once the application has finished. This might not be appropriate for longer running applications (for example benchmarks or larger scale simulations). In order to cope with that limitation you can use the sbatch command.

The sbatch command expects a script that it can run once the requested resources are available. In order to request resources you need to add #SBATCH comments in your script or provide the necessary parameters to sbatch directly. The parameters are the same as with srun. The commands you need to execute are the same you would need to start your application as if you were in an interactive shell.
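
As an illustration, a minimal batch script might look like the sketch below. The partition name, node count, time limit, and application path are placeholder assumptions; only the #SBATCH directive mechanism and the use of srun inside the allocation are taken from the text above:

```shell
#!/bin/bash
# Hypothetical sketch of a batch script for sbatch. Resource requests are
# given as #SBATCH comments; values here are placeholders.
#SBATCH --partition=workq
#SBATCH --nodes=2
#SBATCH --time=00:10:00

# Inside the allocation, srun launches the HPX application with one
# locality per node, exactly as in the interactive case.
srun -n 2 ~/packages/hpx/bin/hello_world
```

Submitting is then a matter of `sbatch ./my_script.sh`; sbatch returns immediately, and the job runs once the requested resources become available.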

Current advances in high performance computing (HPC) continue to suffer from the issues plaguing parallel computation. These issues include, but are not limited to, ease of programming, inability to handle dynamically changing workloads, scalability, and efficient utilization of system resources. Emerging technological trends such as multi-core processors further highlight limitations of existing parallel computation models. To mitigate the aforementioned problems, it is necessary to rethink the approach to parallelization models. ParalleX contains mechanisms such as multi-threading, parcels, global name space support, percolation and local control objects (LCO). By design, ParalleX overcomes limitations of current models of parallelism by alleviating contention, latency, overhead and starvation. With ParalleX, it is further possible to increase performance by at least an order of magnitude on challenging parallel algorithms, e.g., dynamic directed graph algorithms and adaptive mesh refinement methods for astrophysics. An additional benefit of ParalleX is fine-grained control of power usage, enabling reductions in power consumption.

ParalleX - a new Execution Model for Future Architectures

ParalleX is a new parallel execution model that offers an alternative to the conventional computation models, such as message passing. ParalleX distinguishes itself by:

  • Split-phase transaction model
  • Message-driven
  • Distributed shared memory (not cache coherent)
  • Multi-threaded
  • Futures synchronization
  • Local Control Objects (LCOs)
  • Synchronization for anonymous producer-consumer scenarios
  • Percolation (pre-staging of task data)

The ParalleX model is intrinsically latency hiding, delivering an abundance of variable-grained parallelism within a hierarchical namespace environment. The goal of this innovative strategy is to enable future systems delivering very high efficiency, increased scalability and ease of programming. ParalleX can contribute to significant improvements in the design of all levels of computing systems and their usage from application algorithms and their programming languages to system architecture and hardware design together with their supporting compilers and operating system software.

What is HPX

High Performance ParalleX (HPX) is the first runtime system implementation of the ParalleX execution model. The HPX runtime software package is a modular, feature-complete, and performance oriented representation of the ParalleX execution model targeted at conventional parallel computing architectures such as SMP nodes and commodity clusters. It is academically developed and freely available under an open source license. We provide HPX to the community for experimentation and application to achieve high efficiency and scalability for dynamic adaptive and irregular computational problems. HPX is a C++ library that supports a set of critical mechanisms for dynamic adaptive resource management and lightweight task scheduling within the context of a global address space. It is solidly based on many years of experience in writing highly parallel applications for HPC systems.

The two-decade success of the communicating sequential processes (CSP) execution model and its message passing interface (MPI) programming model has been seriously eroded by challenges of power, processor core complexity, multi-core sockets, and heterogeneous structures of GPUs. Both efficiency and scalability for some current (strong scaled) applications and future Exascale applications demand new techniques to expose new sources of algorithm parallelism and exploit unused resources through adaptive use of runtime information.

The ParalleX execution model replaces CSP to provide a new computing paradigm embodying the governing principles for organizing and conducting highly efficient scalable computations greatly exceeding the capabilities of today's problems. HPX is the first practical, reliable, and performance-oriented runtime system incorporating the principal concepts of the ParalleX model publicly provided in open source release form.

HPX is designed by the STE||AR Group (Systems Technology, Emergent Parallelism, and Algorithm Research) at Louisiana State University (LSU)'s Center for Computation and Technology (CCT) to enable developers to exploit the full processing power of many-core systems with an unprecedented degree of parallelism. STE||AR is a research group focusing on system software solutions and scientific application development for hybrid and many-core hardware architectures.

For more information about the STE||AR Group, see People.

Estimates say that we currently run our computers at way below 100% efficiency. The theoretical peak performance (usually measured in FLOPS - floating point operations per second) is much higher than any practical peak performance reached by any application. This is particularly true for highly parallel hardware. The more hardware parallelism we provide to an application, the better the application must scale in order to efficiently use all the resources of the machine. Roughly speaking, we distinguish two forms of scalability: strong scaling (see Amdahl's Law) and weak scaling (see Gustafson's Law). Strong scaling is defined as how the solution time varies with the number of processors for a fixed total problem size. It gives an estimate of how much faster we can solve a particular problem by throwing more resources at it. Weak scaling is defined as how the solution time varies with the number of processors for a fixed problem size per processor. In other words, it defines how much more data we can process by using more hardware resources.
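
The two laws referenced above can be written compactly. With N processors and a parallelizable fraction p of the work, the expected speedups are:

```latex
S_{\text{strong}}(N) = \frac{1}{(1 - p) + \dfrac{p}{N}}
\qquad\text{(Amdahl's Law)}
\qquad
S_{\text{weak}}(N) = (1 - p) + p\,N
\qquad\text{(Gustafson's Law)}
```

Amdahl's Law bounds strong scaling by the serial fraction (for p = 0.95, the speedup can never exceed 20 no matter how large N grows), while Gustafson's Law shows that weak scaling grows nearly linearly in N when the problem size grows with the machine.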

In order to utilize as much hardware parallelism as possible an application must exhibit excellent strong and weak scaling characteristics, which requires a high percentage of work executed in parallel, i.e. using multiple threads of execution. Optimally, if you execute an application on a hardware resource with N processors it either runs N times faster or it can handle N times more data. Both cases imply 100% of the work is executed on all available processors in parallel. However, this is just a theoretical limit. Unfortunately, there are more things which limit scalability, mostly inherent to the hardware architectures and the programming models we use. We break these limitations into four fundamental factors which make our systems SLOW:

  • Starvation occurs when there is insufficient concurrent work available to maintain high utilization of all resources.
  • Latencies are imposed by the time-distance delay intrinsic to accessing remote resources and services.
  • Overhead is work required for the management of parallel actions and resources on the critical execution path which is not necessary in a sequential variant.
  • Waiting for contention resolution is the delay due to the lack of availability of oversubscribed shared resources.

Each of those four factors manifests itself in multiple and different ways; each of the hardware architectures and programming models exposes specific forms. However, the interesting part is that all of them limit the scalability of applications no matter what part of the hardware jungle we look at. Hand-helds, PCs, supercomputers, and the cloud all suffer from the reign of the 4 horsemen: Starvation, Latency, Overhead, and Contention. This realization is very important as it allows us to derive the criteria for solutions to the scalability problem from first principles and to focus our analysis on very concrete patterns and measurable metrics. Moreover, any derived results will be applicable to a wide variety of targets.

Today's computer systems are designed based on the initial ideas of John von Neumann, as published back in 1945, and later extended by the Harvard architecture. These ideas form the foundation, the execution model of computer systems we use currently. But apparently a new response is required in the light of the demands created by today's technology.

So, what are the overarching objectives for designing systems allowing for applications to scale as they should? In our opinion, the main objectives are:

  • Performance: as mentioned, scalability and efficiency are the main criteria people are interested in
  • Fault tolerance: the low expected mean time between failures (MTBF) of future systems requires us to embrace faults rather than trying to avoid them
  • Power: minimizing energy consumption is a must, as it is one of the major cost factors today, even more so in the future
  • Generality: any system should be usable for a broad set of use cases
  • Programmability: for me as a programmer this is a very important objective, ensuring long term platform stability and portability

What needs to be done to meet those objectives, to make applications scale better on tomorrow's architectures? Well, the answer is almost obvious: we need to devise a new execution model - a set of governing principles for the holistic design of future systems - targeted at minimizing the effect of the outlined SLOW factors. Everything we create for future systems, every design decision we make, every criterion we apply, has to be validated against this single, uniform metric. This includes changes in the hardware architecture we prevalently use today, and it certainly involves new ways of writing software, starting from the operating system, runtime system, and compilers, up to the application level. However, the key point is that all those layers have to be co-designed; they are interdependent and cannot be seen as separate facets. The systems we have today have been evolving for over 50 years now. All layers function in a certain way, relying on the other layers to do so as well. However, we do not have the time to wait for a coherent system to evolve for another 50 years. The new paradigms are needed now - therefore, co-design is the key.

As it turns out, we do not have to start from scratch. Not everything has to be invented and designed anew. Many of the ideas needed to combat the 4 horsemen were conceived long ago, often more than 30 years back. All it takes is to gather them into a coherent approach. So please let me highlight some of the derived principles we believe to be crucial for defeating SLOW. Some of those are focused on high-performance computing, others are more general.

Focus on Latency Hiding instead of Latency Avoidance

It is impossible to design a system exposing zero latencies. In an effort to come as close as possible to this goal, many optimizations are mainly targeted towards minimizing latencies. Examples of this can be seen everywhere: low latency network technologies like InfiniBand, caching memory hierarchies in all modern processors, the constant optimization of existing MPI implementations to reduce related latencies, or the data transfer latencies intrinsic to the way we use GPGPUs today. It is important to note that existing latencies are often tightly related to some resource having to wait for an operation to complete. At the same time it would be perfectly fine to do some other, unrelated work in the meantime, allowing us to hide the latencies by filling the idle time with useful work. Modern systems already employ similar techniques (pipelined instruction execution in the processor cores, asynchronous input/output operations, and many more). What we propose is to go beyond anything we know today and to make latency hiding an intrinsic concept of the operation of the whole system stack.
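
The idea can be sketched with standard C++ futures alone (not the HPX API; the names slow_fetch and fetch_and_work are invented for this illustration): the slow operation is launched asynchronously, unrelated useful work fills the idle time, and we synchronize only when the result is actually needed.

```cpp
#include <chrono>
#include <future>
#include <thread>

// Stand-in for a high-latency operation, e.g. a remote service call.
int slow_fetch(int value)
{
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    return 2 * value;
}

// Latency hiding: start the slow operation, then do unrelated work
// instead of blocking until it completes.
int fetch_and_work(int value)
{
    std::future<int> remote = std::async(std::launch::async, slow_fetch, value);

    int local = 0;
    for (int i = 0; i != 1000; ++i)     // useful work overlapping the wait
        local += 1;

    return remote.get() + local;        // synchronize only when needed
}
```

The same pattern, applied pervasively and with much cheaper threads, is what HPX makes an intrinsic part of the system stack.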

Embrace Fine-grained Parallelism instead of Heavyweight Threads

If we plan to hide latencies even for very short operations, such as fetching the contents of a memory cell from main memory (if it is not already cached), we need to have very lightweight threads with extremely short context switching times, optimally executable within one cycle. Granted, for mainstream architectures this is not possible today (even though special machines supporting this mode of operation already exist, such as the Cray XMT). For conventional systems, however, the smaller the overhead of a context switch and the finer the granularity of the threading system, the better the overall system utilization and efficiency will be. For today's architectures we already see a flurry of libraries providing exactly this type of functionality: non-preemptive, task-queue based parallelization solutions, such as Intel Threading Building Blocks (TBB), Microsoft Parallel Patterns Library (PPL), Cilk++, and many others. The possibility of suspending a current task if some preconditions for its execution are not met (such as waiting for I/O or the result of a different task), seamlessly switching to any other task which can continue, and rescheduling the initial task after the required result has been calculated makes the implementation of latency hiding almost trivial.

Rediscover Constraint-Based Synchronization to replace Global Barriers

The code we write today is riddled with implicit (and explicit) global barriers. When I say global barrier I mean the synchronization of the control flow between several (very often all) threads (when using OpenMP) or processes (MPI). For instance, an implicit global barrier is inserted after each loop parallelized using OpenMP as the system synchronizes the threads used to execute the different iterations in parallel. In MPI each of the communication steps imposes an explicit barrier onto the execution flow as (often all) nodes have to be synchronized. Each of those barriers acts as an eye of a needle the overall execution is forced to be squeezed through. Even minimal fluctuations in the execution times of the parallel threads (jobs) cause them to wait. Additionally, it is often only one of the threads that performs the actual reduce operation, which further impedes parallelism. A closer analysis of a couple of key algorithms used in science applications reveals that these global barriers are not always necessary. In many cases it is sufficient to synchronize a small subset of the threads. Any operation should proceed whenever the preconditions for its execution are met, and only then. Usually there is no need to wait for all iterations of a loop to finish before you can continue calculating other things; all you need is for those iterations to be done which produce the required results for a particular next operation. Goodbye global barriers, hello constraint-based synchronization! People have been trying to build this type of computing (and even computers) as far back as the 1970s. The theory behind what they did is based on ideas around static and dynamic dataflow. There are certain attempts today to get back to those ideas and to incorporate them into modern architectures. For instance, a lot of work is being done in the area of constructing dataflow oriented execution trees.
Our results show that employing dataflow techniques in combination with the other ideas, as outlined herein, considerably improves scalability for many problems.
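
The contrast with a global barrier can be sketched with plain C++ futures (a simplified analogue, not the HPX dataflow API; all function names here are invented for the illustration): each consumer waits only on the values it actually depends on, never on every producer.

```cpp
#include <future>

// Three independent "iterations", each producing its own result.
int produce_a() { return 3; }
int produce_b() { return 4; }
int produce_c() { return 5; }   // not needed by the sum below

// Constraint-based synchronization: the sum proceeds as soon as a and b
// are ready; a global barrier would force it to wait for c as well.
int sum_when_ready()
{
    std::future<int> a = std::async(std::launch::async, produce_a);
    std::future<int> b = std::async(std::launch::async, produce_b);
    std::future<int> c = std::async(std::launch::async, produce_c);

    int result = a.get() + b.get();  // waits only on its own preconditions
    c.get();                         // c would be consumed elsewhere; joined
                                     // here only to keep the sketch tidy
    return result;
}
```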

Adaptive Locality Control instead of Static Data Distribution

While this principle seems to be a given for single desktop or laptop computers (the operating system is your friend), it is everything but ubiquitous on modern supercomputers, which are usually built from a large number of separate nodes (e.g. Beowulf clusters), tightly interconnected by a high bandwidth, low latency network. Today's prevalent programming model for those is MPI, which does not directly help with proper data distribution, leaving it to the programmer to decompose the data across all of the nodes the application is running on. There are a couple of specialized languages and programming environments based on PGAS (Partitioned Global Address Space) designed to overcome this limitation, such as Chapel, X10, UPC, or Fortress. However, all systems based on PGAS rely on static data distribution. This works fine as long as such a static data distribution does not result in inhomogeneous workload distributions or other resource utilization imbalances. In a distributed system these imbalances can be mitigated by migrating part of the application data to different localities (nodes). The only framework supporting (limited) migration today is Charm++. The first attempts at solving related problems go back decades as well; a good example is the Linda coordination language. Nevertheless, none of the other mentioned systems support data migration today, which forces users either to rely on static data distribution and live with the related performance hits or to implement everything themselves, which is very tedious and difficult. We believe that the only viable way to flexibly support dynamic and adaptive locality control is to provide a global, uniform address space to the applications, even on distributed systems.

Prefer Moving Work to the Data over Moving Data to the Work

For best performance it seems obvious to minimize the amount of bytes transferred from one part of the system to another. This is true on all levels. At the lowest level we try to take advantage of processor memory caches, thus minimizing memory latencies. Similarly, we try to amortize the data transfer time to and from GPGPUs as much as possible. At higher levels we try to minimize data transfer between different nodes of a cluster or between different virtual machines in the cloud. Our experience (well, it's almost common wisdom) shows that the amount of bytes necessary to encode a certain operation is very often much smaller than the amount of bytes encoding the data the operation is performed upon. Nevertheless we still often transfer the data to a particular place where we execute the operation, just to bring the data back to where it came from afterwards. As an example let me look at the way we usually write our applications for clusters using MPI. This programming model is all about data transfer between nodes. MPI is the prevalent programming model for clusters; it is fairly straightforward to understand and to use. Therefore, we often write our applications in a way that accommodates this model, centered around data transfer. These applications usually work well for smaller problem sizes and for regular data structures. The larger the amount of data we have to churn and the more irregular the problem domain becomes, the worse the overall machine utilization and the (strong) scaling characteristics get. While it is not impossible to implement more dynamic, data driven, and asynchronous applications using MPI, it is overly difficult to do so. At the same time, if we look at applications that prefer to execute the code close to the locality where the data was placed, i.e. utilizing active messages (for instance based on Charm++), we see better asynchrony, simpler application codes, and improved scaling.
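
A toy illustration of the preference, in plain C++ (the node type and its run() member are invented for this sketch): rather than copying a large data block to the caller, the caller hands a small piece of work to the owner of the data and gets back only the small result.

```cpp
#include <functional>
#include <numeric>
#include <vector>

// Stand-in for a locality that owns a large block of data.
struct node
{
    std::vector<int> data;

    // Active-message analogue: accept a small piece of work, run it
    // next to the data, and return only the (small) result.
    int run(std::function<int(std::vector<int> const&)> work) const
    {
        return work(data);
    }
};

// Move the work (a few bytes of code) instead of the data (many bytes).
int sum_on_node(node const& n)
{
    return n.run([](std::vector<int> const& d) {
        return std::accumulate(d.begin(), d.end(), 0);
    });
}
```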

Favor Message Driven Computation over Message Passing

Today's prevalently used programming model on parallel (multi-node) systems is MPI. It is based on message passing (as the name implies), which means that the receiver has to be aware of a message about to come in. Both codes, the sender and the receiver, have to synchronize in order to perform the communication step. Even the newer, asynchronous interfaces require the algorithms to be explicitly coded around the required communication scheme. As a result, any non-trivial MPI application spends a considerable amount of time waiting for incoming messages, thus causing starvation and latencies to impede full resource utilization. The more complex and more dynamic the data structures and algorithms become, the larger the adverse effects get. The community discovered message-driven and data-driven methods of implementing algorithms a long time ago, and systems such as Charm++ have already integrated active messages, demonstrating the validity of the concept. Message driven computation allows messages to be sent without the receiver having to actively wait for them. Any incoming message is handled asynchronously and triggers the encoded action by passing along arguments and - possibly - continuations. HPX combines this scheme with the work queue based scheduling described above, which allows us to almost completely overlap any communication with useful work, reducing latencies to a minimum.

The following sections of our tutorial analyze some examples to help you get familiar with the HPX style of programming. We start off with simple examples that utilize basic HPX elements and then begin to expose the reader to the more complex, yet powerful, HPX concepts.

[Note]

The instructions for building and running the examples currently only cover Unix variants.

Fibonacci

The Fibonacci sequence is a sequence of numbers starting with 0 and 1 where every subsequent number is the sum of the previous two numbers. In this example, we will use HPX to calculate the value of the n-th element of the Fibonacci sequence. In order to compute this problem in parallel, we will use a facility known as a Future.

As shown in the figure below, a Future encapsulates a delayed computation. It acts as a proxy for a result that is initially unknown, most of the time because the computation of the result has not completed yet. The Future synchronizes access to this value by optionally suspending any HPX-threads requesting the result until the value is available. When a Future is created, it spawns a new HPX-thread (either remotely with a parcel or locally by placing it into the thread queue) which, when run, will execute the action associated with the Future. The arguments of the action are bound when the Future is created.

Figure 1. Schematic of a Future execution



Once the action has finished executing, a write operation is performed on the Future. The write operation marks the Future as completed, and optionally stores data returned by the action. When the result of the delayed computation is needed, a read operation is performed on the Future. If the Future's action hasn't completed when a read operation is performed on it, the reader HPX-thread is suspended until the Future is ready. The Future facility allows HPX to schedule work early in a program so that when the function value is needed it will already be calculated and available. We use this property in our Fibonacci example below to enable its parallel execution.

Setup

The source code for this example can be found here: fibonacci.cpp.

To compile this program, go to your HPX build directory (see Getting Started for information on configuring and building HPX) and enter:

make examples.quickstart.fibonacci

To run the program type:

./bin/fibonacci

This should print (the elapsed time will vary):

fibonacci(10) == 55
elapsed time: 0.00186288 [s]

This run used the default settings, which calculate the tenth element of the Fibonacci sequence. To specify which Fibonacci value you want to calculate, use the --n-value option. Additionally you can use the --hpx:threads option to specify how many OS-threads to use when running the program. For instance, running:

./bin/fibonacci --n-value 20 --hpx:threads 4

will yield:

fibonacci(20) == 6765
elapsed time: 0.233827 [s]
Walkthrough

Now that you have compiled and run the code, let's look at how the code works. Since this code is written in C++, we will begin with the main() function. Here you can see that in HPX, main() is only used to initialize the runtime system. It is important to note that application-specific command line options are defined here. HPX uses Boost.Program Options for command line processing. You can see that our program's --n-value option is set by calling the add_options() method on an instance of boost::program_options::options_description. The default value of the variable is set to 10. This is why, when we ran the program for the first time without using the --n-value option, the program returned the 10th value of the Fibonacci sequence. The constructor argument of the description is the text that appears when a user uses the --help option to see what command line options are available. HPX_APPLICATION_STRING is a macro that expands to a string constant containing the name of the HPX application currently being compiled.

In HPX, main() is used to initialize the runtime system and pass the command line arguments to the program. If you wish to add command line options to your program, you would add them here using the instance of the Boost class options_description and invoking the public member function .add_options() (see Boost Documentation or the Fibonacci Example for more details). hpx::init() calls hpx_main() after setting up HPX, which is where the logic of our program is encoded.

int main(int argc, char* argv[])
{
    // Configure application-specific options
    boost::program_options::options_description
       desc_commandline("Usage: " HPX_APPLICATION_STRING " [options]");

    desc_commandline.add_options()
        ( "n-value",
          boost::program_options::value<boost::uint64_t>()->default_value(10),
          "n value for the Fibonacci function")
        ;

    // Initialize and run HPX
    return hpx::init(desc_commandline, argc, argv);
}

The hpx::init() function in main() starts the runtime system, and invokes hpx_main() as the first HPX-thread. Below we can see that the basic program is simple. The command line option --n-value is read in, a timer (hpx::util::high_resolution_timer) is set up to record the time it takes to do the computation, the fibonacci action is invoked synchronously, and the answer is printed out.

int hpx_main(boost::program_options::variables_map& vm)
{
    // extract command line argument, i.e. fib(N)
    boost::uint64_t n = vm["n-value"].as<boost::uint64_t>();

    {
        // Keep track of the time required to execute.
        hpx::util::high_resolution_timer t;

        // Wait for fib() to return the value
        fibonacci_action fib;
        boost::uint64_t r = fib(hpx::find_here(), n);

        char const* fmt = "fibonacci(%1%) == %2%\nelapsed time: %3% [s]\n";
        std::cout << (boost::format(fmt) % n % r % t.elapsed());
    }

    return hpx::finalize(); // Handles HPX shutdown
}

Upon closer inspection we see that we've created a boost::uint64_t to store the result of invoking our fibonacci_action fib. This action is launched synchronously (as the work done inside the action will itself be asynchronous) and returns the requested element of the Fibonacci sequence. But wait, what is an action? And what is this fibonacci_action? For starters, an action is a wrapper for a function. By wrapping functions, HPX can send packets of work to different processing units. These vehicles allow users to calculate work now, later, or on certain nodes. The first argument to our action is the location where the action should be run. In this case, we just want to run the action on the machine that we are currently on, so we use hpx::find_here(). The second parameter simply forwards the index n of the Fibonacci element that we wish to calculate. To further understand this we turn to the code to find where fibonacci_action was defined:

// forward declaration of the Fibonacci function
boost::uint64_t fibonacci(boost::uint64_t n);

// This is to generate the required boilerplate we need for the remote
// invocation to work.
HPX_PLAIN_ACTION(fibonacci, fibonacci_action);

A plain action is the most basic form of action. Plain actions wrap simple global functions which are not associated with any particular object (we will discuss other types of actions in the Accumulator Example). In this block of code the function fibonacci() is declared. After the declaration, the function is wrapped in an action via the macro HPX_PLAIN_ACTION. This macro takes two arguments: the name of the function that is to be wrapped and the name of the action that you are creating.
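
For readers more familiar with the standard library, std::packaged_task gives a rough local analogue of this wrapping (the names triple and invoke_wrapped are invented for the sketch): the function is bundled into an object that can be handed around and invoked later, with a future delivering its result.

```cpp
#include <cstdint>
#include <future>

std::uint64_t triple(std::uint64_t n) { return 3 * n; }

// Wrap the function; the task object could now be queued or shipped
// elsewhere before being invoked - HPX actions extend this idea across
// localities.
std::uint64_t invoke_wrapped(std::uint64_t n)
{
    std::packaged_task<std::uint64_t(std::uint64_t)> task(triple);
    std::future<std::uint64_t> result = task.get_future();

    task(n);               // invoke locally; the result flows into the future
    return result.get();
}
```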

This picture should now start making sense. The function fibonacci() is wrapped in an action fibonacci_action, which was run synchronously but created asynchronous work, and then returns a boost::uint64_t representing the result of the function fibonacci(). Now, let's look at the function fibonacci():

boost::uint64_t fibonacci(boost::uint64_t n)
{
    if (n < 2)
        return n;

    // We restrict ourselves to execute the Fibonacci function locally.
    hpx::naming::id_type const locality_id = hpx::find_here();

    // Invoking the Fibonacci algorithm twice is inefficient.
    // However, we intentionally demonstrate it this way to create some
    // heavy workload.

    fibonacci_action fib;
    hpx::future<boost::uint64_t> n1 =
        hpx::async(fib, locality_id, n - 1);
    hpx::future<boost::uint64_t> n2 =
        hpx::async(fib, locality_id, n - 2);

    return n1.get() + n2.get();   // wait for the Futures to return their values
}

This block of code is much more straightforward. First, if (n < 2), meaning n is 0 or 1, then we simply return n (recall the first element of the Fibonacci sequence is 0 and the second is 1). If n is larger than 1, then we spawn two futures, n1 and n2. Each of these futures represents an asynchronous, recursive call to fibonacci(). After we've created both futures, we wait for both of them to finish computing, add them together, and return that value as our result. The recursive call tree will continue until n is equal to 0 or 1, at which point the value can be returned because it is implicitly known. When this termination condition is reached, the futures can then be added up, producing the n-th value of the Fibonacci sequence.
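
The same recursive structure can be reproduced with standard C++ futures alone, which may help separate the pattern from the HPX-specific machinery (a sketch only; real HPX-threads are far cheaper than the OS threads std::async may create, which is exactly why HPX can afford this granularity):

```cpp
#include <cstdint>
#include <future>

// Plain-future analogue of the HPX fibonacci(): spawn both recursive
// calls, then block on their results. The default launch policy lets
// the runtime defer work instead of creating a thread per call.
std::uint64_t fib(std::uint64_t n)
{
    if (n < 2)
        return n;

    std::future<std::uint64_t> n1 = std::async(fib, n - 1);
    std::future<std::uint64_t> n2 = std::async(fib, n - 2);

    return n1.get() + n2.get();   // wait for the futures to become ready
}
```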

Hello World

This program will print out a hello world message on every OS-thread on every locality. The output will look something like this:

hello world from OS-thread 1 on locality 0
hello world from OS-thread 1 on locality 1
hello world from OS-thread 0 on locality 0
hello world from OS-thread 0 on locality 1
Setup

The source code for this example can be found here: hello_world.cpp.

To compile this program, go to your HPX build directory (see Getting Started for information on configuring and building HPX) and enter:

make examples.quickstart.hello_world

To run the program type:

./bin/hello_world

This should print:

hello world from OS-thread 0 on locality 0

To use more OS-threads use the command line option --hpx:threads and type the number of threads that you wish to use. For example, typing:

./bin/hello_world --hpx:threads 2

will yield:

hello world from OS-thread 1 on locality 0
hello world from OS-thread 0 on locality 0

Notice how the ordering of the two print statements will change with subsequent runs. To run this program on multiple localities please see the section Using PBS.

Walkthrough

Now that you have compiled and run the code, let's look at how the code works, beginning with main():

Here is the main entry point. By including 'hpx/hpx_main.hpp', HPX will invoke the plain old C main() as its first HPX-thread.

int main()
{
    // Get a list of all available localities.
    std::vector<hpx::naming::id_type> localities =
        hpx::find_all_localities();

    // Reserve storage space for futures, one for each locality.
    std::vector<hpx::lcos::future<void> > futures;
    futures.reserve(localities.size());

    for (hpx::naming::id_type const& node : localities)
    {
        // Asynchronously start a new task. The task is encapsulated in a
        // future, which we can query to determine if the task has
        // completed.
        typedef hello_world_foreman_action action_type;
        futures.push_back(hpx::async<action_type>(node));
    }

    // The non-callback version of hpx::lcos::wait_all takes a single parameter,
    // a vector of futures to wait on. hpx::wait_all only returns when
    // all of the futures have finished.
    hpx::wait_all(futures);
    return 0;
}

In this excerpt of the code we again see the use of futures. This time the futures are stored in a vector so that they can easily be accessed. hpx::lcos::wait_all() is a family of functions that wait for a std::vector<> of futures to become ready. In this piece of code, we are using the synchronous version of hpx::lcos::wait_all(), which takes one argument (the std::vector<> of futures to wait on). This function will not return until all the futures in the vector have become ready.
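
The waiting pattern itself can be sketched with standard futures (an analogue of the hpx::wait_all usage, not its implementation; greet_all is an invented name): launch one task per item, store the futures in a vector, then wait for every one of them before proceeding.

```cpp
#include <atomic>
#include <future>
#include <vector>

// Launch one asynchronous task per "locality" and wait for all of
// them, mirroring the vector-of-futures pattern used with
// hpx::wait_all in the example above.
int greet_all(int count)
{
    std::atomic<int> greeted{0};

    std::vector<std::future<void>> futures;
    futures.reserve(count);

    for (int i = 0; i != count; ++i)
        futures.push_back(std::async(std::launch::async,
            [&greeted] { ++greeted; }));    // the per-locality "task"

    for (std::future<void>& f : futures)    // wait_all analogue
        f.wait();

    return greeted.load();
}
```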

In the Fibonacci Example, we used hpx::find_here() to specify the target of our actions. Here, we instead use hpx::find_all_localities(), which returns a std::vector<> containing the identifiers of all the machines in the system, including the one that we are on.

As in the Fibonacci Example our futures are set using hpx::async<>(). The hello_world_foreman_action is declared here:

// Define the boilerplate code necessary for the function 'hello_world_foreman'
// to be invoked as an HPX action.
HPX_PLAIN_ACTION(hello_world_foreman, hello_world_foreman_action);

Another way of thinking about this wrapping technique is as follows: functions (the work to be done) are wrapped in actions, and actions can be executed locally or remotely (e.g. on another machine participating in the computation).

Now it is time to look at the hello_world_foreman() function which was wrapped in the action above:

void hello_world_foreman()
{
    // Get the number of worker OS-threads in use by this locality.
    std::size_t const os_threads = hpx::get_os_thread_count();

    // Find the global name of the current locality.
    hpx::naming::id_type const here = hpx::find_here();

    // Populate a set with the OS-thread numbers of all OS-threads on this
    // locality. When the hello world message has been printed on a particular
    // OS-thread, we will remove it from the set.
    std::set<std::size_t> attendance;
    for (std::size_t os_thread = 0; os_thread < os_threads; ++os_thread)
        attendance.insert(os_thread);

    // As long as there are still elements in the set, we must keep scheduling
    // HPX-threads. Because HPX features work-stealing task schedulers, we have
    // no way of enforcing which worker OS-thread will actually execute
    // each HPX-thread.
    while (!attendance.empty())
    {
        // Each iteration, we create a task for each element in the set of
        // OS-threads that have not said "Hello world". Each of these tasks
        // is encapsulated in a future.
        std::vector<hpx::lcos::future<std::size_t> > futures;
        futures.reserve(attendance.size());

        for (std::size_t worker : attendance)
        {
            // Asynchronously start a new task. The task is encapsulated in a
            // future, which we can query to determine if the task has
            // completed.
            typedef hello_world_worker_action action_type;
            futures.push_back(hpx::async<action_type>(here, worker));
        }

        // Wait for all of the futures to finish. The callback version of the
        // hpx::lcos::wait_each function takes two arguments: a vector of futures,
        // and a binary callback.  The callback takes two arguments; the first
        // is the index of the future in the vector, and the second is the
        // return value of the future. hpx::lcos::wait_each doesn't return until
        // all the futures in the vector have returned.
        hpx::lcos::local::spinlock mtx;
        hpx::lcos::wait_each(
            hpx::util::unwrapped([&](std::size_t t) {
                if (std::size_t(-1) != t)
                {
                    boost::lock_guard<hpx::lcos::local::spinlock> lk(mtx);
                    attendance.erase(t);
                }
            }),
            futures);
    }
}

Now, before we discuss hello_world_foreman(), let's talk about the hpx::lcos::wait_each() function. hpx::lcos::wait_each() provides a way to make sure that all of the futures have finished being calculated without having to call hpx::future::get() for each one. The version of hpx::lcos::wait_each() used here performs a non-blocking wait, which acts on an std::vector<>. It queries the state of the futures, waiting for them to finish. Whenever a future becomes marked as ready, hpx::lcos::wait_each() invokes a callback function provided by the user, supplying the callback function with the result of the future.

In hello_world_foreman(), a std::set<> called attendance keeps track of which OS-threads have printed out the hello world message. When the OS-thread prints out the statement, the future is marked as ready, and hpx::lcos::wait_each() invokes the callback function, in this case a C++11 lambda. This lambda erases the OS-thread's id from the set attendance, thus letting hello_world_foreman() know which OS-threads still need to print out hello world. However, if the future returns a value of -1, the future executed on an OS-thread which had already printed out hello world. In this case, we have to try again by rescheduling the future in the next round. We do this by leaving the OS-thread's id in attendance.

Finally, let us look at hello_world_worker(). Here, hello_world_worker() checks to see if it is on the target OS-thread. If it is executing on the correct OS-thread, it prints out the hello world message and returns the OS-thread id to hpx::lcos::wait_each() in hello_world_foreman(). If it is not executing on the correct OS-thread, it returns a value of -1, which causes hello_world_foreman() to leave the OS-thread id in attendance.

std::size_t hello_world_worker(std::size_t desired)
{
    // Returns the OS-thread number of the worker that is running this
    // HPX-thread.
    std::size_t current = hpx::get_worker_thread_num();
    if (current == desired)
    {
        // The HPX-thread has been run on the desired OS-thread.
        char const* msg = "hello world from OS-thread %1% on locality %2%";

        hpx::cout << (boost::format(msg) % desired % hpx::get_locality_id())
                  << std::endl << hpx::flush;

        return desired;
    }

    // This HPX-thread has been run by the wrong OS-thread, make the foreman
    // try again by rescheduling it.
    return std::size_t(-1);
}

// Define the boilerplate code necessary for the function 'hello_world_worker'
// to be invoked as an HPX action (by a HPX future). This macro defines the
// type 'hello_world_worker_action'.
HPX_PLAIN_ACTION(hello_world_worker, hello_world_worker_action);

Because HPX features work stealing task schedulers, there is no way to guarantee that an action will be scheduled on a particular OS-thread. This is why we must use a guess-and-check approach.

Accumulator

The accumulator example demonstrates the use of components. Components are C++ classes that expose methods as a type of HPX action. These actions are called component actions.

Components are globally named, meaning that a component action can be called remotely (e.g. from another machine). The component implemented in this example is called accumulator.

In the Fibonacci Example and the Hello World Example, we introduced plain actions, which wrapped global functions. The target of a plain action is an identifier which refers to a particular machine involved in the computation. For plain actions, the target is the machine where the action will be executed.

Component actions, however, do not target machines. Instead, they target component instances. The instance may live on the machine that we've invoked the component action from, or it may live on another machine.

The component in this example exposes three different functions:

  • reset() - Resets the accumulator value to 0.
  • add(arg) - Adds arg to the accumulator's value.
  • query() - Queries the value of the accumulator.

This example creates an instance of the accumulator, and then allows the user to enter commands at a prompt, which subsequently invoke actions on the accumulator instance.
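
Stripped of all HPX machinery, the behavior being exposed is just a plain C++ class with the three operations above. The following sketch is illustrative only; the real component (shown in the walkthrough below) wraps exactly these methods as component actions:

```cpp
#include <cassert>

// A plain, local version of the accumulator: same three operations that the
// HPX component exposes as actions (reset, add, query).
class accumulator
{
public:
    void reset() { value_ = 0; }
    void add(double arg) { value_ += arg; }
    double query() const { return value_; }

private:
    double value_ = 0;
};
```

Replaying the sample session above, add 5 followed by add 10 makes query return 15.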

Setup

The source code for this example can be found here: accumulator_client.cpp.

To compile this program, go to your HPX build directory (see Getting Started for information on configuring and building HPX) and enter:

make examples.accumulator.accumulator

To run the program type:

./bin/accumulator_client

Once the program starts running, it will print the following prompt and then wait for input. An example session is given below:

commands: reset, add [amount], query, help, quit
> add 5
> add 10
> query
15
> add 2
> query
17
> reset
> add 1
> query
1
> quit
Walkthrough

Now, let's take a look at the source code of the accumulator example. This example consists of two parts: an HPX component library (a library that exposes an HPX component) and a client application which uses the library. This walkthrough will cover the HPX component library. The code for the client application can be found here: accumulator_client.cpp.

An HPX component is represented by two C++ classes:

  • A server class - The implementation of the component's functionality.
  • A client class - A high-level interface that acts as a proxy for an instance of the component.

Typically, these two classes have the same name, but the server class usually lives in a different sub-namespace (server). For example, the full names of the two classes in accumulator are:

  • examples::server::accumulator (server class)
  • examples::accumulator (client class)
The Server Class

The following code is from: server/accumulator.hpp.

All HPX component server classes must inherit publicly from the HPX component base class: hpx::components::component_base<>

The accumulator component inherits from hpx::components::locking_hook<>. This allows the runtime system to ensure that all action invocations are serialized, i.e. that no two actions are invoked at the same time on a given component instance. This makes the component thread safe, and no additional locking has to be implemented by the user. Moreover, accumulator is a component because it also inherits from hpx::components::component_base<> (the template argument passed to locking_hook is used as its base class). The following snippet shows the corresponding code:

class accumulator
  : public hpx::components::locking_hook<
        hpx::components::component_base<accumulator> >

Our accumulator class will need a data member to store its value in, so let's declare a data member:

argument_type value_;

The constructor for this class simply initializes value_ to 0:

accumulator() : value_(0) {}

Next, let's look at the three methods of this component that we will be exposing as component actions:

/// Reset the component's value to 0.
void reset()
{
    //  set value_ to 0.
    value_ = 0;
}

/// Add the given number to the accumulator.
void add(argument_type arg)
{
    //  add arg to value_, and store the result in value_.
    value_ += arg;
}

/// Return the current value to the caller.
argument_type query() const
{
    // Get the value of value_.
    return value_;
}

Here are the action types. These types wrap the methods we're exposing. The wrapping technique is very similar to the one used in the Fibonacci Example and the Hello World Example:

HPX_DEFINE_COMPONENT_ACTION(accumulator, reset);
HPX_DEFINE_COMPONENT_ACTION(accumulator, add);
HPX_DEFINE_COMPONENT_ACTION(accumulator, query);

The last piece of code in the server class header is the declaration of the action type registration code:

HPX_REGISTER_ACTION_DECLARATION(
    examples::server::accumulator::reset_action,
    accumulator_reset_action);

HPX_REGISTER_ACTION_DECLARATION(
    examples::server::accumulator::add_action,
    accumulator_add_action);

HPX_REGISTER_ACTION_DECLARATION(
    examples::server::accumulator::query_action,
    accumulator_query_action);
Note

The code above must be placed in the global namespace.

The rest of the registration code is in accumulator.cpp.

///////////////////////////////////////////////////////////////////////////////
// Add factory registration functionality.
HPX_REGISTER_COMPONENT_MODULE();

///////////////////////////////////////////////////////////////////////////////
typedef hpx::components::component<
    examples::server::accumulator
> accumulator_type;

HPX_REGISTER_COMPONENT(accumulator_type, accumulator);

///////////////////////////////////////////////////////////////////////////////
// Serialization support for accumulator actions.
HPX_REGISTER_ACTION(
    accumulator_type::wrapped_type::reset_action,
    accumulator_reset_action);
HPX_REGISTER_ACTION(
    accumulator_type::wrapped_type::add_action,
    accumulator_add_action);
HPX_REGISTER_ACTION(
    accumulator_type::wrapped_type::query_action,
    accumulator_query_action);
Note

The code above must be placed in the global namespace.

The Client Class

The following code is from accumulator.hpp.

The client class is the primary interface to a component instance. Client classes are used to create components:

// Create a component on this locality.
examples::accumulator c = hpx::new_<examples::accumulator>(hpx::find_here());

and to invoke component actions:

c.add_sync(4);

Clients, like servers, need to inherit from a base class; this time it is hpx::components::client_base<>:

class accumulator
  : public hpx::components::client_base<
        accumulator, server::accumulator
    >

For readability, we typedef the base class like so:

typedef hpx::components::client_base<
    accumulator, server::accumulator
> base_type;

Here are examples of how to expose actions through a client class:

There are a few different ways of invoking actions:

  • Non-blocking: For actions which don't have return types, or when we do not care about the result of an action, we can invoke the action using fire-and-forget semantics. This means that once we have asked HPX to compute the action, we forget about it completely and continue with our computation. We use hpx::apply<>() instead of hpx::async<>() to invoke an action in a non-blocking fashion.
void reset_non_blocking()
{
    HPX_ASSERT(this->get_id());

    typedef server::accumulator::reset_action action_type;
    hpx::apply<action_type>(this->get_id());
}
  • Asynchronous: Futures, as demonstrated in Fibonacci Example and the Hello World Example, enable asynchronous action invocation. Here's an example from the accumulator client class:
hpx::future<argument_type> query_async()
{
    HPX_ASSERT(this->get_id());

    typedef server::accumulator::query_action action_type;
    return hpx::async<action_type>(this->get_id());
}
  • Synchronous: To invoke an action in a fully synchronous manner, we can simply call hpx::async<>().get() (e.g., create a future and immediately wait on it to be ready). Here's an example from the accumulator client class:
void add_sync(argument_type arg)
{
    HPX_ASSERT(this->get_id());

    typedef server::accumulator::add_action action_type;
    hpx::async<action_type>(this->get_id(), arg).get();
}

Note that this->get_id() references a data member of the hpx::components::client_base<> base class which identifies the server accumulator instance.

hpx::id_type is a type which represents a global identifier in HPX. This type specifies the target of an action. This is the type that is returned by hpx::find_here() in which case it represents the locality the code is running on.

HPX provides its users with several different tools to express parallel concepts simply. One of these tools is a local control object (LCO) called dataflow. An LCO is a type of component that can spawn a new thread when triggered. LCOs are also distinguished from other components by a standard interface which allows users to understand and use them easily.

A dataflow, being an LCO, is triggered when the values it depends on become available. For instance, if you have a calculation X that depends on the results of three other calculations, you could set up a dataflow that would begin the calculation X as soon as the other three calculations have returned their values. Dataflows can also be set up to depend on other dataflows. It is this property that makes dataflow a powerful parallelization tool: if you understand the dependencies of your calculation, you can devise a simple algorithm which sets up a dependency tree to be executed.

In this example, we calculate compound interest. To calculate compound interest, one must calculate the interest made in each compound period, and then add that interest back to the principal before calculating the interest made in the next period. A practical person would of course use the formula for compound interest:

F = P(1 + i) ^ n
where:
    F= Future value
    P= Principal
    i= Interest rate
    n= number of compound periods

Nevertheless, we have chosen for the sake of example to manually calculate the future value by iterating:

I = P * i
 and
P = P + I
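
The two approaches agree, which can be checked with a short standalone sketch (parameter values taken from the sample run below; the helper names are illustrative, not part of the example):

```cpp
#include <cassert>
#include <cmath>

// Iterative scheme used by the example: repeat I = P * i; P = P + I.
double iterate_interest(double principal, double rate, int periods)
{
    for (int n = 0; n != periods; ++n)
    {
        double const interest = principal * rate;   // I = P * i
        principal += interest;                      // P = P + I
    }
    return principal;
}

// Closed-form result: F = P * (1 + i)^n
double closed_form(double principal, double rate, int periods)
{
    return principal * std::pow(1.0 + rate, periods);
}
```

With a principal of 100, a rate of 5% per compound period, and 36/6 = 6 periods, both yield a final amount of about 134.01, matching the sample output below.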
Setup

The source code for this example can be found here: interest_calculator.cpp.

To compile this program, go to your HPX build directory (see Getting Started for information on configuring and building HPX) and enter:

make examples.quickstart.interest_calculator

To run the program type:

./bin/interest_calculator --principal 100 --rate 5 --cp 6 --time 36

This should print:

Final amount: 134.01
Amount made: 34.0096
Walkthrough

Let us begin with main. Here we can see that we are again using Boost.Program Options to set our command line variables (see the Fibonacci Example for more details). These options set the principal, rate, compound period, and time. It is important to note that the units of time for cp and time must be the same.

int main(int argc, char ** argv)
{
    options_description cmdline("Usage: " HPX_APPLICATION_STRING " [options]");

    cmdline.add_options()
        ("principal", value<double>()->default_value(1000), "The principal [$]")
        ("rate", value<double>()->default_value(7), "The interest rate [%]")
        ("cp", value<int>()->default_value(12), "The compound period [months]")
        ("time", value<int>()->default_value(12*30),
            "The time money is invested [months]")
    ;

    return hpx::init(cmdline, argc, argv);
}

Next we look at hpx_main.

int hpx_main(variables_map & vm)
{
    {
        using hpx::shared_future;
        using hpx::make_ready_future;
        using hpx::dataflow;
        using hpx::util::unwrapped;
        hpx::naming::id_type here = hpx::find_here();

        double init_principal=vm["principal"].as<double>(); //Initial principal
        double init_rate=vm["rate"].as<double>(); //Interest rate
        int cp=vm["cp"].as<int>(); //Length of a compound period
        int t=vm["time"].as<int>(); //Length of time money is invested

        init_rate/=100; //Rate is a % and must be converted
        t/=cp; //Determine how many times to iterate interest calculation:
               //How many full compund periods can fit in the time invested

        // In non-dataflow terms the implemented algorithm would look like:
        //
        // int t = 5;    // number of time periods to use
        // double principal = init_principal;
        // double rate = init_rate;
        //
        // for (int i = 0; i < t; ++i)
        // {
        //     double interest = calc(principal, rate);
        //     principal = add(principal, interest);
        // }
        //
        // Please note the similarity with the code below!

        shared_future<double> principal = make_ready_future(init_principal);
        shared_future<double> rate = make_ready_future(init_rate);

        for (int i = 0; i < t; ++i)
        {
            shared_future<double> interest = dataflow(unwrapped(calc), principal, rate);
            principal = dataflow(unwrapped(add), principal, interest);
        }

        // wait for the dataflow execution graph to be finished calculating our
        // overall interest
        double result = principal.get();

        std::cout << "Final amount: " << result << std::endl;
        std::cout << "Amount made: " << result-init_principal << std::endl;
    }

    return hpx::finalize();
}

Here we find our command line variables read in, the rate converted from a percent to a decimal, and the number of calculation iterations determined, before our shared_futures are set up. Notice that we first place our principal and rate into shared futures by passing the variables init_principal and init_rate using hpx::make_ready_future.

In this way hpx::shared_future<double> principal and rate will be initialized to init_principal and init_rate when hpx::make_ready_future<double> returns a future containing those initial values. These shared futures then enter the for loop and are passed to the dataflow computing interest. Next, principal and interest are passed to the reassignment of principal using a dataflow. A dataflow will first wait for its arguments to be ready before launching any callbacks, so add in this case will not begin until both principal and interest are ready. This loop continues for each compound period that must be calculated. To see how interest and principal are calculated in the loop, let us look at calc_action and add_action:

// Calculate interest for one period
double calc(double principal, double rate)
{
    return principal * rate;
}

///////////////////////////////////////////////////////////////////////////////
// Add the amount made to the principal
double add(double principal, double interest)
{
    return principal + interest;
}

After the shared future dependencies have been defined in hpx_main, we see the following statement:

double result = principal.get();

This statement calls hpx::future::get() on the shared future principal, which had its value calculated by our for loop. The program will wait here until the entire dataflow tree has been calculated and the value assigned to result. The program then prints out the final value of the investment and the amount of interest made by subtracting the initial value of the investment from the final value.

When developers write code they typically begin with a simple serial code and build upon it until all of the required functionality is present. The following set of examples was developed to demonstrate this iterative process of evolving a simple serial program into an efficient, fully distributed HPX application. For this demonstration, we implemented a 1D heat distribution problem. This calculation simulates the diffusion of heat across a ring from an initialized state to some user-defined point in the future. It does this by breaking the ring into discrete segments and using the current segment's temperature and the temperatures of the surrounding segments to calculate the temperature of the current segment in the next timestep, as shown by the figure below.

Figure 2. Heat Diffusion Example Program Flow

Heat Diffusion Example Program Flow


We parallelize this code over the following eight examples:

The first example is straight serial code. In this code we instantiate a vector U which contains two vectors of doubles as seen in the structure stepper.

struct stepper
{
    // Our partition type
    typedef double partition;

    // Our data for one time step
    typedef std::vector<partition> space;

    // Our operator
    static double heat(double left, double middle, double right)
    {
        return middle + (k*dt/(dx*dx)) * (left - 2*middle + right);
    }

    // do all the work on 'nx' data points for 'nt' time steps
    space do_work(std::size_t nx, std::size_t nt)
    {
        // U[t][i] is the state of position i at time t.
        std::vector<space> U(2);
        for (space& s : U)
            s.resize(nx);

        // Initial conditions: f(0, i) = i
        for (std::size_t i = 0; i != nx; ++i)
            U[0][i] = double(i);

        // Actual time step loop
        for (std::size_t t = 0; t != nt; ++t)
        {
            space const& current = U[t % 2];
            space& next = U[(t + 1) % 2];

            next[0] = heat(current[nx-1], current[0], current[1]);

            for (std::size_t i = 1; i != nx-1; ++i)
                next[i] = heat(current[i-1], current[i], current[i+1]);

            next[nx-1] = heat(current[nx-2], current[nx-1], current[0]);
        }

        // Return the solution at time-step 'nt'.
        return U[nt % 2];
    }
};

Each element in the vector of doubles represents a single grid point. To calculate the change in heat distribution, the temperature of each grid point, along with that of its neighbors, is passed to the function heat. In order to improve readability, references named current and next are created which, depending on the time step, point to the first or second vector of doubles. The first vector of doubles is initialized with a simple heat ramp. After calling the heat function with the data in the "current" vector, the results are placed into the "next" vector.
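
A quick sanity check of the stencil can be done standalone. The constants below are illustrative only (the real example reads k, dt, and dx from the command line); with a uniform temperature field, left - 2*middle + right vanishes, so one ring step leaves the field unchanged:

```cpp
#include <cstddef>
#include <vector>

// Illustrative discretization constants; the example reads these at runtime.
double const k = 0.5, dt = 1.0, dx = 1.0;

// The heat operator from the stepper above.
double heat(double left, double middle, double right)
{
    return middle + (k * dt / (dx * dx)) * (left - 2 * middle + right);
}

// One time step over a ring of points, wrapping at the boundaries exactly as
// do_work() does with its special first and last iterations.
std::vector<double> step(std::vector<double> const& current)
{
    std::size_t const nx = current.size();
    std::vector<double> next(nx);
    for (std::size_t i = 0; i != nx; ++i)
        next[i] = heat(current[(i + nx - 1) % nx], current[i],
                       current[(i + 1) % nx]);
    return next;
}
```

A uniform field is a fixed point of this stencil, which is what we expect of a diffusion operator at steady state.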

In example 2 we employ a technique called futurization. Futurization is a method by which we can easily transform a code which is serially executed into a code which creates asynchronous threads. In the simplest case this involves replacing a variable with a future to a variable, a function with a future to a function, and adding a .get() at the point where a value is actually needed. The code below shows how this technique was applied to the struct stepper.
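
The idea can be illustrated with standard C++ futures (a minimal sketch using std::async rather than the HPX facilities; the function names are invented for illustration):

```cpp
#include <future>

// Plain serial code: int y = square(6); consumes the value immediately.
int square(int x) { return x * x; }

// Futurized: the variable becomes a future, the call goes through an
// asynchronous launch, and a .get() is added where the value is needed.
std::future<int> square_futurized(int x)
{
    return std::async(std::launch::async, square, x);
}
```

The caller writes std::future<int> y = square_futurized(6); and only blocks at y.get(), i.e. at the point where the value is actually consumed.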

struct stepper
{
    // Our partition type
    typedef hpx::shared_future<double> partition;

    // Our data for one time step
    typedef std::vector<partition> space;

    // Our operator
    static double heat(double left, double middle, double right)
    {
        return middle + (k*dt/(dx*dx)) * (left - 2*middle + right);
    }

    // do all the work on 'nx' data points for 'nt' time steps
    hpx::future<space> do_work(std::size_t nx, std::size_t nt)
    {
        using hpx::dataflow;
        using hpx::util::unwrapped;

        // U[t][i] is the state of position i at time t.
        std::vector<space> U(2);
        for (space& s : U)
            s.resize(nx);

        // Initial conditions: f(0, i) = i
        for (std::size_t i = 0; i != nx; ++i)
            U[0][i] = hpx::make_ready_future(double(i));

        auto Op = unwrapped(&stepper::heat);

        // Actual time step loop
        for (std::size_t t = 0; t != nt; ++t)
        {
            space const& current = U[t % 2];
            space& next = U[(t + 1) % 2];

            // WHEN U[t][i-1], U[t][i], and U[t][i+1] have been computed, THEN we
            // can compute U[t+1][i]
            for (std::size_t i = 0; i != nx; ++i)
            {
                next[i] = dataflow(
                        hpx::launch::async, Op,
                        current[idx(i, -1, nx)], current[i], current[idx(i, +1, nx)]
                    );
            }
        }

        // Now the asynchronous computation is running; the above for-loop does
        // not wait on anything. There is no implicit waiting at the end of each
        // timestep; the computation of each U[t][i] will begin as soon as its
        // dependencies are ready and hardware is available.

        // Return the solution at time-step 'nt'.
        return hpx::when_all(U[nt % 2]);
    }
};

In example 2, we re-define our partition type as a shared_future and, in main, create the object "result" which is a future to a vector of partitions. We use result to represent the last vector in a string of vectors created for each timestep. In order to move to the next timestep, the values of a partition and its neighbors must be passed to heat once the futures that contain them are ready. In HPX, we have an LCO (Local Control Object) named Dataflow which assists the programmer in expressing this dependency. Dataflow allows us to pass the results of a set of futures to a specified function when the futures are ready. Dataflow takes three types of arguments, one which instructs the dataflow on how to perform the function call (async or sync), the function to call (in this case Op), and futures to the arguments that will be passed to the function. When called, dataflow immediately returns a future to the result of the specified function. This allows users to string dataflows together and construct an execution tree.

After the values of the futures in dataflow are ready, the values must be pulled out of the future container to be passed to the function heat. In order to do this, we use the HPX facility unwrapped, which underneath calls .get() on each of the futures so that the function heat will be passed doubles and not futures to doubles.
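
Conceptually, unwrapped's effect can be sketched with standard futures (the helper below is invented for illustration): it calls .get() on each argument future so that the wrapped function receives plain values.

```cpp
#include <future>

double heat(double l, double m, double r) { return m + 0.5 * (l - 2 * m + r); }

// What unwrapped does conceptually: call .get() on each argument future so
// that 'heat' is passed doubles, not futures to doubles.
double call_heat_unwrapped(std::future<double> l, std::future<double> m,
    std::future<double> r)
{
    return heat(l.get(), m.get(), r.get());
}
```

Given three ready futures standing in for current[i-1], current[i], and current[i+1], the call behaves exactly like the direct invocation of heat.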

By setting up the algorithm this way, the program will be able to execute as quickly as the dependencies of each future are met. Unfortunately, this example runs terribly slowly. The increase in execution time is caused by the overhead needed to create a future for each data point. Because the work done within each call to heat is very small, the overhead of creating and scheduling each future is greater than that of the actual useful work! In order to amortize the overheads of our synchronization techniques, we need to be able to control the amount of work that will be done with each future. We call this amount of work per overhead the grain size.

In example 3, we return to our serial code to figure out how to control the grain size of our program. The strategy that we employ is to create "partitions" of data points. The user can define how many partitions are created and how many data points are contained in each partition. This is accomplished by creating the struct partition which contains a member object data_, a vector of doubles which holds the data points assigned to a particular instance of partition.
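
A minimal sketch of such a partition type, under the description above (member names follow the text; the real example adds more functionality):

```cpp
#include <cstddef>
#include <vector>

// Each partition owns a block of grid points, so later a single future can
// stand for many data points at once, letting the user tune the grain size.
struct partition
{
    partition(std::size_t size, double initial_value)
      : data_(size, initial_value)
    {}

    std::vector<double> data_;   // the data points held by this partition
};
```

With np partitions of nx points each, the grain size is controlled by choosing nx: more points per partition means more useful work per unit of scheduling overhead.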

In example 4, we take advantage of the partition setup by redefining space to be a vector of shared_futures with each future representing a partition. In this manner, each future represents several data points. Because the user can define how many data points are contained in each partition (and therefore how many data points that are represented by one future) a user can now control the grainsize of the simulation. The rest of the code was then futurized in the same manner that was done in example 2. It should be noted how strikingly similar example 4 is to example 2.

Example 4 finally shows good results. This code scales equivalently to the OpenMP version. While these results are promising, there are more opportunities to improve the application's scalability. Currently this code only runs on one locality, but to get the full benefit of HPX we need to be able to distribute the work to other machines in a cluster. We begin to add this functionality in example 5.

In order to run on a distributed system, a large amount of boilerplate code must be added. Fortunately, HPX provides us with the concept of a "component" which saves us from having to write quite as much code. A component is an object which can be remotely accessed using its global address. Components are made of two parts: a server and a client class. While the client class is not required, abstracting the server behind a client allows us to ensure type safety instead of having to pass around pointers to global objects.

Example 5 renames example 4's struct partition to partition_data and adds serialization support. Next we add the server-side representation of the data in the structure partition_server. partition_server inherits from hpx::components::simple_component_base, which contains the server-side component boilerplate. This boilerplate code allows a component's public members to be accessed from anywhere on the machine via its Global Identifier (GID).

To encapsulate the component, we create a client-side helper class, which inherits its boilerplate code from hpx::components::client_base. This object allows us to create new instances of our component and access its members without having to know its GID. In addition, we use the client class to assist us with managing our asynchrony: for example, the member function get_data() of our client class partition returns a future to partition_data.

In the structure stepper, we have also had to make some changes to accommodate a distributed environment. In order to update a partition, we must retrieve the data from the neighboring partitions, which could be remote. These retrievals are asynchronous, and the function heat_part_data, which amongst other things calls heat, should not be called until the data from the neighboring partitions has arrived. Therefore it should come as no surprise that we synchronize this operation with another instance of dataflow (found in heat_part). This dataflow is passed futures to the data in the current and surrounding partitions, obtained by calling get_data() on each respective partition. When these futures are ready, dataflow passes them to the unwrapped function, which extracts the shared_array of doubles and passes them to the lambda. The lambda calls heat_part_data on the locality which the middle partition is on.

Although this example could run on a distributed system, it only runs on one locality, as it always uses hpx::find_here() as the target for the functions to run on.

In example 6, we begin to distribute the partition data on different nodes. This is accomplished in stepper::do_work() by passing the GID of the locality where we wish to create the partition to the partition constructor.

for (std::size_t i = 0; i != np; ++i)
    U[0][i] = partition(localities[locidx(i, np, nl)], nx, double(i));

We distribute the partitions evenly based on the number of localities used, as described in the function locidx. Because some of the data needed to update a partition in heat_part could now be on another locality, we must devise a way of moving data to the locality of the middle partition. We accomplish this by adding a switch in the function get_data() which returns the last element of the buffer data_ if the requested data is from the left partition, or the first element of the buffer if it is from the right partition. In this way only the necessary elements, not the whole buffer, are exchanged between nodes. The reader should be reminded that this exchange of boundary elements occurs in the function get_data() and is therefore executed asynchronously.
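
The idea behind that switch can be sketched standalone (the enumerator names follow the partition_server used in example 7; the real component's signature differs): a neighbor needs only one boundary element, so get_data() sends just that element instead of the whole buffer.

```cpp
#include <vector>

// Which part of a partition's buffer a caller is asking for.
enum partition_type { left_partition, middle_partition, right_partition };

std::vector<double> get_data(std::vector<double> const& data_, partition_type t)
{
    switch (t)
    {
    case left_partition:
        return std::vector<double>(1, data_.back());    // only the last element
    case right_partition:
        return std::vector<double>(1, data_.front());   // only the first element
    default:
        return data_;                                   // local: whole buffer
    }
}
```

Only one double crosses the network per neighbor per time step, rather than the whole partition.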

Now that we have the code running in a distributed setting, it is time to make some optimizations. The function heat_part spends most of its time on two tasks: retrieving remote data and working on the data in the middle partition. Because we know that the data for the middle partition is local, we can overlap the work on the middle partition with the (possibly remote) call of get_data(). This algorithmic change, implemented in example 7, can be seen below:

// The partitioned operator, it invokes the heat operator above on all elements
// of a partition.
static partition heat_part(partition const& left,
    partition const& middle, partition const& right)
{
    using hpx::dataflow;
    using hpx::util::unwrapped;

    hpx::shared_future<partition_data> middle_data =
        middle.get_data(partition_server::middle_partition);

    hpx::future<partition_data> next_middle = middle_data.then(
        unwrapped(
            [middle](partition_data const& m) -> partition_data
            {
                // All local operations are performed once the middle data of
                // the previous time step becomes available.
                std::size_t size = m.size();
                partition_data next(size);
                for (std::size_t i = 1; i != size-1; ++i)
                    next[i] = heat(m[i-1], m[i], m[i+1]);
                return next;
            }
        )
    );

    return dataflow(
        hpx::launch::async,
        unwrapped(
            [left, middle, right](partition_data next, partition_data const& l,
                partition_data const& m, partition_data const& r) -> partition
            {
                // Calculate the missing boundary elements once the
                // corresponding data has become available.
                std::size_t size = m.size();
                next[0] = heat(l[size-1], m[0], m[1]);
                next[size-1] = heat(m[size-2], m[size-1], r[0]);

                // The new partition_data will be allocated on the same locality
                // as 'middle'.
                return partition(middle.get_id(), next);
            }
        ),
        std::move(next_middle),
        left.get_data(partition_server::left_partition),
        middle_data,
        right.get_data(partition_server::right_partition)
    );
}

Example 8 completes the futurization process and utilizes the full potential of HPX by distributing the program flow to multiple localities, usually defined as nodes in a cluster. It accomplishes this task by running an instance of HPX main on each locality. In order to coordinate the execution of the program, the struct stepper is wrapped into a component. In this way, each locality contains an instance of stepper which executes its own instance of the function do_work(). This scheme does create an interesting synchronization problem that must be solved. While the program flow was being coordinated on the head node, the GID of each component was known. However, when we distribute the program flow, a partition has no notion of the GID of its neighbor if the neighboring partition is on another locality. In order to make the GIDs of neighboring partitions visible to each other, we create two buffers to store the GIDs of the remote neighboring partitions on the left and right respectively. These buffers are filled by sending the GID of each newly created edge partition to the right and left buffers of the neighboring localities.

In order to finish the simulation, the solution vectors named "result" are then gathered together on locality 0 and added into a vector of spaces, overall_result, using the HPX functions gather_id and gather_here.

Example 8 completes this example series which takes the serial code of example 1 and incrementally morphs it into a fully distributed parallel code. This evolution was guided by the simple principles of futurization, the knowledge of grainsize, and utilization of components. Applying these techniques easily facilitates the scalable parallelization of most applications.

The HPX Build System
CMake Basics
Build Prerequisites
Installing Boost Libraries
Building HPX
CMake Variables used to configure HPX
CMake Toolchains shipped with HPX
Build recipes
Setting up the HPX Documentation Tool Chain
Building Projects using HPX
Using HPX with pkg-config
Using HPX with CMake based projects
Testing HPX
Running tests manually
Issue Tracker
Buildbot
Launching HPX
Configure HPX Applications
The HPX INI File Format
Built-in Default Configuration Settings
Loading INI Files
Loading Components
Logging
HPX Command Line Options
More Details about HPX Command Line Options
HPX System Components
The HPX I/O-streams Component
Writing HPX applications
Global Names
Applying Actions
Action Type Definition
Action Invocation
Applying an Action Asynchronously without any Synchronization
Applying an Action Asynchronously with Synchronization
Applying an Action Synchronously
Applying an Action with a Continuation but without any Synchronization
Applying an Action with a Continuation and with Synchronization
Action Error Handling
Writing Components
Defining Components
Defining Client Side Representation Classes
Creating Component Instances
Using Component Instances
Using LCOs
Extended Facilities for Futures
High Level Parallel Facilities
Using Parallel Algorithms
Executors and Executor Traits
Executor Parameters and Executor Parameter Traits
Using Task Blocks
Extensions for Task Blocks
Error Handling
Performance Counters
Performance Counter Names
Consuming Performance Counter Data
Consuming Performance Counter Data from the Command Line
Consuming Performance Counter Data using the HPX API
Providing Performance Counter Data
Exposing Performance Counter Data using a Simple Function
Implementing a Full Performance Counter
Existing HPX Performance Counters
HPX Thread Scheduling Policies

The buildsystem for HPX is based on CMake. CMake is a cross-platform build-generator tool. CMake does not build the project, it generates the files needed by your build tool (GNU make, Visual Studio, etc) for building HPX.

This section gives an introduction on how to use our build system to build HPX and how to use HPX in your own projects.

In general, the HPX CMake scripts try to adhere to the general cmake policies on how to write CMake based projects.

Basic CMake Usage

This section explains basic aspects of CMake, mostly for explaining those options which you may need on your day-to-day usage.

CMake comes with extensive documentation, in the form of HTML files and in the cmake executable itself. Execute cmake --help for further help options.

CMake needs to know for which build tool it shall generate files (GNU make, Visual Studio, Xcode, etc). If not specified on the command line, it tries to guess based on your environment. Once the build tool has been identified, CMake uses the corresponding generator to create files for it. You can explicitly specify the generator with the command line option -G "Name of the generator". To see the generators available on your platform, execute:

cmake --help

This will list the generator names at the end of the help text. Generator names are case-sensitive. Example:

cmake -G "Visual Studio 9 2008" path/to/hpx

For a given development platform there can be more than one adequate generator. If you use Visual Studio, "NMake Makefiles" is a generator you can use for building with NMake. By default, CMake chooses the most specific generator supported by your development environment. If you want an alternative generator, you must tell CMake with the -G option.

Quick Start

We use here the command-line, non-interactive CMake interface.

  1. Download and install CMake from here: CMake Downloads. Version 2.8.10 is the minimum required version for HPX.
  2. Open a shell. Your development tools must be reachable from this shell through the PATH environment variable.
  3. Create a directory to contain the build. Building HPX in the source directory is not supported. cd to this directory:

    mkdir mybuilddir
    cd mybuilddir
    
  4. Execute this command on the shell replacing path/to/hpx/ with the path to the root of your HPX source tree:

    cmake path/to/hpx
    

CMake will detect your development environment, perform a series of tests and will generate the files required for building HPX. CMake will use default values for all build parameters. See the CMake Variables used to configure HPX section for fine-tuning your build.

This can fail if CMake can't detect your toolset, or if it thinks that the environment is not sane enough. In this case, make sure that the toolset that you intend to use is the only one reachable from the shell and that the shell itself is the correct one for your development environment. CMake will refuse to build MinGW makefiles if you have a POSIX shell reachable through the PATH environment variable, for instance. You can force CMake to use various compilers and tools. Please visit CMake Useful Variables for a detailed overview of specific CMake variables.
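For instance, specific compilers can be forced on the first invocation like this (a sketch; the compiler names are examples, adjust them to your environment):

```shell
# force specific C and C++ compilers on the initial CMake run
cmake -DCMAKE_C_COMPILER=gcc-4.9 \
      -DCMAKE_CXX_COMPILER=g++-4.9 \
      path/to/hpx
```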

Options and Variables

Variables customize how the build will be generated. Options are boolean variables, with possible values ON/OFF. Options and variables are defined on the CMake command line like this:

cmake -DVARIABLE=value path/to/hpx

You can set a variable after the initial CMake invocation to change its value. You can also undefine a variable:

cmake -UVARIABLE path/to/hpx

Variables are stored in the CMake cache. This is a file named CMakeCache.txt in the root of the build directory. Do not hand-edit it.

Variables are listed here with their type appended after a colon. It is also valid to write the variable and its type on the CMake command line:

cmake -DVARIABLE:TYPE=value path/to/hpx

CMake supports the following variable types: BOOL (options), STRING (arbitrary string), PATH (directory name), FILEPATH (file name).
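For example, a PATH and a BOOL variable can be passed with explicit types like this (a sketch; /opt/boost is an example path, the variables themselves are documented in CMake Variables used to configure HPX):

```shell
# explicitly typed variable definitions
cmake -DBOOST_ROOT:PATH=/opt/boost \
      -DHPX_WITH_EXAMPLES:BOOL=OFF \
      path/to/hpx
```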

Supported Platforms

At this time, HPX supports the following platforms. Other platforms may work, but we do not test HPX with other platforms, so please be warned.

Table 3. Supported Platforms for HPX

Name

Recommended Version

Minimum Version

Architectures

Linux

3.2

2.6

x86-32, x86-64, k1om

BlueGeneQ

V1R2M0

V1R2M0

PowerPC A2

Windows

7, Server 2008 R2

Any Windows system

x86-32, x86-64

Mac OSX

Any OSX system

x86-64


Software and Libraries

In the simplest case, HPX depends on one set of libraries: Boost. So, before you read further, please make sure you have a recent version of Boost installed on your target machine. HPX currently requires at least Boost V1.49.0 to work properly. It may build and run with older versions, but we do not test HPX with those versions, so please be warned.

Installing the Boost libraries is described in detail in Boost's own Getting Started document. It is often possible to download the Boost libraries using the package manager of your distribution; please refer to the corresponding documentation for your system for more information. However, if you've never used the Boost libraries (or even if you have), here's a quick primer: Installing Boost Libraries.

In addition, we urge every user to have a recent version of hwloc installed on the target system in order to have proper support for thread pinning and NUMA awareness.

HPX is written in 99.99% Standard C++ (the remaining 0.01% is platform-specific assembly code). As such, HPX is compilable with almost any standards-compliant C++ compiler. A compiler supporting the C++11 Standard is highly recommended. The code base takes advantage of C++11 language features when available (move semantics, rvalue references, magic statics, etc.), which may speed up the execution of your code significantly. We currently support the following C++ compilers: GCC, MSVC, ICPC and clang. For the status of your favorite compiler with HPX visit the HPX Buildbot Website.

Table 4. Software Prerequisites for HPX on Linux systems

Name

Recommended Version

Minimum Version

Notes

Compilers

   

GNU Compiler Collection (g++)

4.9 or newer

4.6.4

 

Intel Composer XE Suites

2014 or newer

2013

 

clang: a C language family frontend for LLVM

3.4 or newer

3.3

 

Build System

   

CMake

3.1

2.8.10

 

Required Libraries

   

Boost C++ Libraries

1.57.0 or newer

1.49.0

See below for an important limitation when using Boost V1.54.0.

Portable Hardware Locality (HWLOC)

1.10

1.2 (Xeon Phi: 1.6)

Used for OS-thread pinning and NUMA awareness. This library is optional on Mac OSX.


[Important]Important

Because of a problem in Boost V1.54.0 this version can't be used for compiling HPX if you use gcc V4.6.x. Please use either an earlier or a later version of Boost with this compiler.

[Important]Important

When compiling with the Intel Compiler on Linux systems, we only support C++ Standard Libraries provided by gcc 4.6 and upwards. If the 'g++' in your path is older than 4.6, please specify the path of a newer g++ by setting CMAKE_CXX_FLAGS='-gxx-name=/path/to/g++' via cmake.

[Important]Important

When building Boost using gcc, please note that it is always a good idea to specify a cxxflags=-std=c++11 command line argument to b2 (bjam). Note, however, that this is absolutely necessary when using gcc V5.2 and above.
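A corresponding b2 invocation could look like this (a sketch; -j8 is an example core count):

```shell
# build Boost with C++11 mode enabled, as recommended above
./b2 -j8 cxxflags=-std=c++11 --build-type=complete
```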

Table 5. Software Prerequisites for HPX on Windows systems

Name

Recommended Version

Minimum Version

Notes

Compilers

   

Visual C++ (x64)

2013

2013

 

Build System

   

CMake

3.1

2.8.10

 

Required Libraries

   

Boost

1.57.0 or newer

1.49.0

See below for an important limitation when using Boost V1.55.0.

Portable Hardware Locality (HWLOC)

1.10

1.5

Used for OS-thread pinning and NUMA awareness.


[Note]Note

You need to build the following Boost libraries for HPX: Boost.DateTime, Boost.Filesystem, Boost.ProgramOptions, Boost.Regex, Boost.Serialization, Boost.System, Boost.Thread, Boost.Chrono (starting Boost 1.49.0), and Boost.Atomic (starting Boost 1.53.0).

[Important]Important

Because of a problem in Boost V1.55.0 this version can't be used for compiling HPX if you use MSVC2013. Please use either an earlier or a later version of Boost with this compiler.

Depending on the options you chose while building and installing HPX, you will find that HPX may depend on several other libraries such as those listed below.

[Note]Note

In order to use a high speed parcelport, we currently recommend configuring HPX to use MPI, so that MPI can be used for communication between different localities. Please set the CMake variable MPI_CXX_COMPILER to your MPI C++ compiler wrapper if it is not detected automatically.

Table 6. Highly Recommended Optional Software Prerequisites for HPX on Linux systems

Name

Recommended Version

Minimum Version

Notes

google-perftools

1.7.1

1.7.1

Used as a replacement for the system allocator, and for allocation diagnostics.

libunwind

0.99

0.97

Dependency of google-perftools on x86-64, used for stack unwinding.

Open MPI

1.10.1

1.8.0

Can be used as a highspeed communication library backend for the parcelport.


Table 7. Optional Software Prerequisites for HPX on Linux systems

Name

Recommended Version

Minimum Version

Notes

Performance Application Programming Interface (PAPI)

Used for accessing hardware performance data.

jemalloc

2.1.2

2.1.0

Used as a replacement for the system allocator.

Hierarchical Data Format V5 (HDF5)

1.8.7

1.6.7

Used for data I/O in some example applications. See important note below.


Table 8. Optional Software Prerequisites for HPX on Windows systems

Name

Recommended Version

Minimum Version

Notes

Hierarchical Data Format V5 (HDF5)

1.8.7

1.6.7

Used for data I/O in some example applications. See important note below.


[Important]Important

The C++ HDF5 libraries must be compiled with threadsafety support enabled. This has to be explicitly specified while configuring the HDF5 libraries, as it is not the default. Additionally, you must set the following environment variables before configuring the HDF5 libraries (this part only needs to be done on Linux):

export CFLAGS='-DHDatexit=""'
export CPPFLAGS='-DHDatexit=""'
[Important]Important

Because of a problem in Boost V1.54.0 this version can't be used for compiling HPX if you use gcc V4.6.x. Please use either an earlier or a later version of Boost with this compiler.

[Important]Important

Because of a problem in Boost V1.55.0 this version can't be used for compiling HPX if you use MSVC2013. Please use either an earlier or a later version of Boost with this compiler.

[Important]Important

When building Boost using gcc, please note that it is always a good idea to specify a cxxflags=-std=c++11 command line argument to b2 (bjam). Note, however, that this is absolutely necessary when using gcc V5.2 and above.

The easiest way to create a working Boost installation is to compile Boost from sources yourself. This is particularly important as many high performance resources, even if they have Boost installed, usually only provide you with an older version of Boost. We suggest you download the most recent release of the Boost libraries from here: Boost Downloads. Unpack the downloaded archive into a directory of your choosing. We will refer to this directory as $BOOST.

Building and installing the Boost binaries is simple, regardless of what platform you are on:

cd $BOOST
./bootstrap.sh --prefix=<where to install boost>
./b2 -j<N> --build-type=complete
./b2 install

where: <where to install boost> is the directory the built binaries will be installed to, and <N> is the number of cores to use to build the Boost binaries. On Windows, run bootstrap.bat instead of ./bootstrap.sh.

After the above sequence of commands has been executed (this may take a while!) you will need to specify the directory where Boost was installed as BOOST_ROOT (<where to install boost>) while executing cmake for HPX as explained in detail in the sections How to Install HPX on Unix Variants and How to Install HPX on Windows.
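Putting this together, a configure step against a custom Boost installation might look like this (a sketch with hypothetical paths):

```shell
# point CMake at the freshly installed Boost and choose an install prefix
cmake -DBOOST_ROOT=/opt/boost \
      -DCMAKE_INSTALL_PREFIX=/opt/hpx \
      path/to/hpx
```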

[Important]Important

On Windows, depending on the installed versions of Visual Studio, you might also want to pass the correct toolset to the b2 command depending on which version of the IDE you want to use. In addition, passing address-model=64 is highly recommended.

Basic Information

Once CMake has been run, the build process can be started. The HPX build process is highly configurable through CMake and various CMake variables influence the build process. The build process consists of the following parts:

  • The HPX core libraries (target core): This forms the basic set of HPX libraries. The generated targets are:
    • hpx: The core HPX library (always enabled).
    • hpx_init: The HPX initialization library that applications need to link against to define the HPX entry points (disabled for static builds).
    • iostreams_component: The component used for (distributed) IO (always enabled).
    • component_storage_component: The component needed for migration to persistent storage.
    • unordered_component: The component needed for a distributed (partitioned) hash table.
    • partitioned_vector_component: The component needed for a distributed (partitioned) vector.
    • memory_component: A dynamically loaded plugin that exposes memory-based performance counters (only available on Linux).
    • io_counter_component: A dynamically loaded plugin that exposes I/O performance counters (only available on Linux).
    • papi_component: A dynamically loaded plugin that exposes PAPI performance counters (enabled with HPX_WITH_PAPI, default is Off).
  • HPX Examples (target examples): This target is enabled by default and builds all HPX examples (disable by setting HPX_WITH_BUILD_EXAMPLES=Off). HPX examples are part of the 'all' target and are included in the installation if enabled.
  • HPX Tests (target tests): This target builds the HPX test suite and is enabled by default (disable by setting HPX_WITH_BUILD_TESTS=Off). They are not built by the 'all' target and have to be built separately.
  • HPX Documentation (target docs): This target builds the Documentation, this is not enabled by default. For more information see Setting up the HPX Documentation Tool Chain.

For a complete list of available CMake variables that influence the build of HPX see CMake Variables used to configure HPX.

The variables can be used to refine the recipes that can be found here, which show some basic steps on how to build HPX for a specific platform.

In order to use HPX, only the core libraries are required (the ones marked as optional above are truly optional). When building against HPX, the CMake variable HPX_LIBRARIES will contain hpx and hpx_init (for pkgconfig, those are added to the Libs sections). In order to use the optional libraries, you need to specify them as link dependencies in your build (see Building Projects using HPX).
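As a sketch of what building against these libraries looks like, a minimal CMakeLists.txt for an application using HPX and the iostreams component could read as follows (the project name and main.cpp are placeholders; see Using HPX with CMake based projects for the authoritative recipe):

```cmake
# minimal CMakeLists.txt sketch for an HPX application
cmake_minimum_required(VERSION 2.8.10)
project(my_hpx_app CXX)

# locate an installed HPX (set HPX_DIR or CMAKE_PREFIX_PATH if not found)
find_package(HPX REQUIRED)

# links against hpx and hpx_init; the component dependency pulls in
# the iostreams_component mentioned above
add_hpx_executable(my_hpx_app
  SOURCES main.cpp
  COMPONENT_DEPENDENCIES iostreams)
```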

As HPX is a modern C++ library, we require a certain minimal set of features from the C++11 standard. In addition, we make use of certain C++14 features if the compiler supports them. This means that the HPX build system will try to determine the highest supported C++ standard flavor and check for the availability of those features. That is, the default will be the highest C++ standard version available. If you want to force HPX to use a specific C++ standard version, you can use the following CMake variables:

  • HPX_WITH_CXX0X: Enables Pre-C++11 support (This is the minimal required mode on older gcc versions).
  • HPX_WITH_CXX11: Enables C++11 support
  • HPX_WITH_CXX14: Enables C++14 support
  • HPX_WITH_CXX0Y: Enables (experimental) C++17 support
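For example, to pin HPX to C++11 mode regardless of what the compiler supports beyond it (a sketch):

```shell
# force the C++11 standard flavor
cmake -DHPX_WITH_CXX11=ON path/to/hpx
```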
Build Types

CMake can be configured to generate project files suitable for builds that have enabled debugging support or for an optimized build (without debugging support). The CMake variable used to set the build type is CMAKE_BUILD_TYPE (for more information see the CMake Documentation). Available build types are:

  • Debug: Full debug symbols available and additional assertions to help debugging. To enable the debug build type for the HPX API, the C++ Macro HPX_DEBUG is defined.
  • RelWithDebInfo: Release build with debugging symbols. This is most useful for profiling applications.
  • Release: Release build. This disables assertions and enables default compiler optimizations.
  • RelMinSize: Release build with optimizations for small binary sizes.
[Important]Important

We currently don't guarantee ABI compatibility between Debug and Release builds. Please make sure that applications built against HPX use the same build type as you used to build HPX. For CMake builds, this means that the CMAKE_BUILD_TYPE variable has to match, and for projects not using CMake, the HPX_DEBUG macro has to be set in debug mode.
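For example, to configure the profiling-friendly build type mentioned above (a sketch):

```shell
# release build with debug symbols, suitable for profiling
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo path/to/hpx
```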

Platform specific notes

Some platforms require special link and/or compiler flags to build HPX. This is handled via CMake's support for different toolchains (see cmake-toolchains(7) for more information). This is also used for cross compilation.

HPX ships with a set of toolchains that can be used for compilation of HPX itself and applications depending on HPX. Please see CMake Toolchains shipped with HPX for more information.

In order to enable full static linking of the HPX libraries, the CMake variable HPX_WITH_STATIC_LINKING has to be set to On.

In order to configure HPX, you can set a variety of options to allow cmake to generate your specific makefiles/project files.

The options are split into these categories:

Generic Options

HPX_WITH_AUTOMATIC_SERIALIZATION_REGISTRATION:BOOL

Use automatic serialization registration for actions and functions. This affects compatibility between HPX applications compiled with different compilers (default ON)

HPX_WITH_BENCHMARK_SCRIPTS_PATH:PATH

Directory to place batch scripts in

HPX_WITH_COLOCATED_BACKWARDS_COMPATIBILITY:BOOL

Enable backwards compatibility for apply_colocated, async_colocated and friends

HPX_WITH_COMPILER_WARNINGS:BOOL

Enable compiler warnings (default: ON)

HPX_WITH_COMPONENT_GET_GID_COMPATIBILITY:BOOL

Enable backwards compatibility for component::get_gid() functions

HPX_WITH_COMPRESSION_BZIP2:BOOL

Enable bzip2 compression for parcel data (default: OFF).

HPX_WITH_COMPRESSION_SNAPPY:BOOL

Enable snappy compression for parcel data (default: OFF).

HPX_WITH_COMPRESSION_ZLIB:BOOL

Enable zlib compression for parcel data (default: OFF).

HPX_WITH_FORTRAN:BOOL

Enable or disable the compilation of Fortran examples using HPX

HPX_WITH_FULL_RPATH:BOOL

Build and link HPX libraries and executables with full RPATHs (default: ON)

HPX_WITH_GCC_VERSION_CHECK:BOOL

Don't ignore version reported by gcc (default: ON)

HPX_WITH_GENERIC_CONTEXT_COROUTINES:BOOL

Use Boost.Context as the underlying coroutines context switch implementation.

HPX_WITH_HIDDEN_VISIBILITY:BOOL

Use -fvisibility=hidden for builds on platforms which support it (default ON)

HPX_WITH_HWLOC:BOOL

Use hwloc for hardware topology information and thread pinning. If disabled, performance might be reduced.

HPX_WITH_LOCAL_DATAFLOW_COMPATIBILITY:BOOL

Enable backwards compatibility for hpx::lcos::local::dataflow() functions

HPX_WITH_LOGGING:BOOL

Build HPX with logging enabled (default: ON).

HPX_WITH_MALLOC:STRING

Define which allocator should be linked in. Options are: system, tcmalloc, jemalloc, tbbmalloc, and custom (default is: tcmalloc)

HPX_WITH_NATIVE_TLS:BOOL

Use native TLS support if available (default: ON)

HPX_WITH_PARCEL_COALESCING:BOOL

Enable the parcel coalescing plugin (default: ON).

HPX_WITH_RUN_MAIN_EVERYWHERE:BOOL

Run hpx_main by default on all localities (default: OFF).

HPX_WITH_SECURITY:BOOL

Enable security support via libsodium.

HPX_WITH_STATIC_LINKING:BOOL

Compile HPX statically linked libraries (Default: OFF)

Build Targets Options

HPX_WITH_COMPILE_ONLY_TESTS:BOOL

Create build system support for compile time only HPX tests (default ON)

HPX_WITH_DEFAULT_TARGETS:BOOL

Associate the core HPX library with the default build target (default: ON).

HPX_WITH_DOCUMENTATION:BOOL

Build the HPX documentation (default OFF).

HPX_WITH_DOCUMENTATION_SINGLEPAGE:BOOL

The HPX documentation should be build as a single page HTML (default OFF).

HPX_WITH_EXAMPLES:BOOL

Build the HPX examples (default ON)

HPX_WITH_IO_COUNTERS:BOOL

Build the HPX I/O performance counters (default: ON)

HPX_WITH_PSEUDO_DEPENDENCIES:BOOL

Force creating pseudo targets and pseudo dependencies (default ON).

HPX_WITH_RUNTIME:BOOL

Build HPX runtime (default: ON)

HPX_WITH_TESTS:BOOL

Build the HPX tests (default ON)

HPX_WITH_TESTS_BENCHMARKS:BOOL

Build HPX benchmark tests (default: ON)

HPX_WITH_TESTS_EXTERNAL_BUILD:BOOL

Build external cmake build tests (default: ON)

HPX_WITH_TESTS_HEADERS:BOOL

Build HPX header tests (default: OFF)

HPX_WITH_TESTS_REGRESSIONS:BOOL

Build HPX regression tests (default: ON)

HPX_WITH_TESTS_UNIT:BOOL

Build HPX unit tests (default: ON)

HPX_WITH_TOOLS:BOOL

Build HPX tools (default: OFF)

Thread Manager Options

HPX_WITH_MAX_CPU_COUNT:STRING

HPX applications will not use more than this number of OS-Threads (default: 64)

HPX_WITH_MORE_THAN_64_THREADS:BOOL

HPX applications will be able to run on more than 64 cores

HPX_WITH_SCHEDULER_LOCAL_STORAGE:BOOL

Enable scheduler local storage for all HPX schedulers (default: OFF)

HPX_WITH_STACKTRACES:BOOL

Attach backtraces to HPX exceptions (default: ON)

HPX_WITH_SWAP_CONTEXT_EMULATION:BOOL

Emulate SwapContext API for coroutines (default: OFF)

HPX_WITH_THREAD_BACKTRACE_ON_SUSPENSION:BOOL

Enable thread stack back trace being captured on suspension (default: OFF)

HPX_WITH_THREAD_BACKTRACE_ON_SUSPENSION_DEPTH:STRING

Thread stack back trace depth being captured on suspension (default: 5)

HPX_WITH_THREAD_CREATION_AND_CLEANUP_RATES:BOOL

Enable measuring thread creation and cleanup times (default: OFF)

HPX_WITH_THREAD_CUMULATIVE_COUNTS:BOOL

Enable keeping track of cumulative thread counts in the schedulers (default: ON)

HPX_WITH_THREAD_FULLBACKTRACE_ON_SUSPENSION:BOOL

Enable thread stack back trace being captured on suspension (default: OFF)

HPX_WITH_THREAD_IDLE_RATES:BOOL

Enable measuring the percentage of overhead times spent in the scheduler (default: OFF)

HPX_WITH_THREAD_LOCAL_STORAGE:BOOL

Enable thread local storage for all HPX threads (default: OFF)

HPX_WITH_THREAD_MANAGER_IDLE_BACKOFF:BOOL

HPX scheduler threads are backing off on idle queues (default: ON)

HPX_WITH_THREAD_QUEUE_WAITTIME:BOOL

Enable collecting queue wait times for threads (default: OFF)

HPX_WITH_THREAD_SCHEDULERS:STRING

Which thread schedulers are built. Options are: all, abp-priority, local, static-priority, static, hierarchy, and periodic-priority. For multiple enabled schedulers, separate with a semicolon (default: all)

HPX_WITH_THREAD_STACK_MMAP:BOOL

Use mmap for stack allocation on appropriate platforms

HPX_WITH_THREAD_STEALING_COUNTS:BOOL

Enable keeping track of counts of thread stealing incidents in the schedulers (default: ON)

HPX_WITH_THREAD_TARGET_ADDRESS:BOOL

Enable storing target address in thread for NUMA awareness (default: OFF)

AGAS Options

HPX_WITH_AGAS_DUMP_REFCNT_ENTRIES:BOOL

Enable dumps of the AGAS refcnt tables to logs (default: OFF)

Parcelport Options

HPX_WITH_PARCELPORT_IBVERBS:BOOL

Enable the ibverbs based parcelport. This is currently an experimental feature

HPX_WITH_PARCELPORT_IBVERBS_IFNAME:STRING

The interface name of the ibverbs capable network adapter (default: ib0)

HPX_WITH_PARCELPORT_IBVERBS_MAX_MEMORY_CHUNKS:STRING

Maximum number of chunks that can be allocated (default: 100)

HPX_WITH_PARCELPORT_IBVERBS_MEMORY_CHUNK_SIZE:STRING

Number of bytes a chunk in the memory pool can hold (default: 64MB)

HPX_WITH_PARCELPORT_IBVERBS_MESSAGE_PAYLOAD:STRING

Size of the message payload not sent with RDMA (default: 512 bytes)

HPX_WITH_PARCELPORT_IPC:BOOL

Enable the IPC (inter process communication) based parcelport. This is currently an experimental feature

HPX_WITH_PARCELPORT_MPI:BOOL

Enable the MPI based parcelport.

HPX_WITH_PARCELPORT_MPI_ENV:STRING

List of environment variables checked to detect MPI (default: MV2_COMM_WORLD_RANK;PMI_RANK;OMPI_COMM_WORLD_SIZE;ALPS_APP_PE).

HPX_WITH_PARCELPORT_MPI_MULTITHREADED:BOOL

Turn on MPI multithreading support (default: ON).

HPX_WITH_PARCELPORT_TCP:BOOL

Enable the TCP based parcelport.

Profiling Options

HPX_WITH_APEX:BOOL

Enable APEX instrumentation support.

HPX_WITH_GOOGLE_PERFTOOLS:BOOL

Enable Google Perftools instrumentation support.

HPX_WITH_ITTNOTIFY:BOOL

Enable Amplifier (ITT) instrumentation support.

HPX_WITH_PAPI:BOOL

Enable the PAPI based performance counter.

HPX_WITH_TAU:BOOL

Enable TAU profiling support.

Debugging Options

HPX_WITH_THREAD_DEBUG_INFO:BOOL

Enable thread debugging information (default: OFF, implicitly enabled in debug builds)

HPX_WITH_THREAD_GUARD_PAGE:BOOL

Enable thread guard page (default: ON)

HPX_WITH_VALGRIND:BOOL

Enable Valgrind instrumentation support.

HPX_WITH_VERIFY_LOCKS:BOOL

Enable lock verification code (default: OFF, implicitly enabled in debug builds)

HPX_WITH_VERIFY_LOCKS_BACKTRACE:BOOL

Enable thread stack back trace being captured on lock registration (to be used in combination with HPX_WITH_VERIFY_LOCKS=ON, default: OFF)

HPX_WITH_VERIFY_LOCKS_GLOBALLY:BOOL

Enable global lock verification code (default: OFF, implicitly enabled in debug builds)

Here is a list of additional libraries and tools which are either optionally supported by the build system or are optionally required for certain examples or tests. These libraries and tools can be detected by the HPX build system.

Each of the tools or libraries listed here will be automatically detected if they are installed in some standard location. If a tool or library is installed in a different location you can specify its base directory by appending _ROOT to the variable name as listed below. For instance, to configure a custom directory for BOOST, specify BOOST_ROOT=/custom/boost/root.

Additional Tools and Libraries used by HPX

BOOST_ROOT:PATH

Specifies where to look for the Boost installation to be used for compiling HPX. Set this if CMake is not able to locate a suitable version of Boost. The directory specified here can be either the root of an installed Boost distribution or the directory where you unpacked and built Boost without installing it (with staged libraries).

HWLOC_ROOT:PATH

Specifies where to look for the Portable Hardware Locality (HWLOC) library. While it is not necessary to compile HPX with HWLOC, we strongly suggest you do so. HWLOC provides platform independent support for extracting information about the used hardware architecture (number of cores, number of NUMA domains, hyperthreading, etc.). HPX utilizes this information if available.

PAPI_ROOT:PATH

Specifies where to look for the Performance Application Programming Interface (PAPI) library. The PAPI library is necessary to compile a special component exposing PAPI hardware events and counters as HPX performance counters. This is not available on the Windows platform.

AMPLIFIER_ROOT:PATH

Specifies where to look for one of the tools of the Intel Parallel Studio(tm) product, either Intel Amplifier(tm) or Intel Inspector(tm). This should be set if the CMake variable HPX_WITH_ITTNOTIFY is set to ON. Enabling ITT support in HPX will integrate any application with the mentioned Intel tools, which customizes the generated information for your application and improves the generated diagnostics.

SODIUM_ROOT:PATH

Specifies where to look for the Networking and Cryptography library (NaCl), libsodium. The Sodium library is necessary to enable the security related functionality (see HPX_WITH_SECURITY).

Additional Tools and Libraries Required by some of the Examples

HDF5_ROOT:PATH

Specifies where to look for the Hierarchical Data Format V5 (HDF5) include files and libraries.

In order to compile HPX for various platforms, we provide a variety of Toolchain files that take care of setting up various CMake variables like compilers etc. They are located in the cmake/toolchains directory:

To use them pass the -DCMAKE_TOOLCHAIN_FILE=<toolchain> argument to the cmake invocation.
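For instance, to cross-compile with the shipped ARM-gcc toolchain file (a sketch; replace path/to/hpx with your source tree):

```shell
# cross-compile HPX for ARM using the shipped toolchain file
cmake -DCMAKE_TOOLCHAIN_FILE=path/to/hpx/cmake/toolchains/ARM-gcc.cmake \
      path/to/hpx
```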

ARM-gcc
# Copyright (c) 2015 Thomas Heller
#
# Distributed under the Boost Software License, Version 1.0. (See accompanying
# file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_CROSSCOMPILING ON)
# Set the gcc Compiler
set(CMAKE_CXX_COMPILER arm-linux-gnueabihf-g++-4.8)
set(CMAKE_C_COMPILER arm-linux-gnueabihf-gcc-4.8)
set(HPX_WITH_GENERIC_CONTEXT_COROUTINES ON CACHE BOOL "enable generic coroutines")
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
BGION-gcc
# Copyright (c) 2014 John Biddiscombe
#
# Distributed under the Boost Software License, Version 1.0. (See accompanying
# file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
# This is the default toolchain file to be used with CNK on a BlueGene/Q. It sets
# the appropriate compile flags and compiler such that HPX will compile.
# Note that you still need to provide Boost, hwloc and other utility libraries
# like a custom allocator yourself.
#
# Usage : cmake -DCMAKE_TOOLCHAIN_FILE=~/src/hpx/cmake/toolchains/BGION-gcc.cmake ~/src/hpx
#
set(CMAKE_SYSTEM_NAME Linux)
# Set the gcc Compiler
set(CMAKE_CXX_COMPILER g++)
set(CMAKE_C_COMPILER gcc)
#set(CMAKE_Fortran_COMPILER)
# Add flags we need for BGAS compilation
set(CMAKE_CXX_FLAGS_INIT
  "-D__powerpc__ -D__bgion__ -I/gpfs/bbp.cscs.ch/home/biddisco/src/bgas/rdmahelper "
  CACHE STRING "Initial compiler flags used to compile for BGAS"
)
# the V1R2M2 includes are necessary for some hardware specific features
#-DHPX_SMALL_STACK_SIZE=0x200000 -DHPX_MEDIUM_STACK_SIZE=0x200000 -DHPX_LARGE_STACK_SIZE=0x200000 -DHPX_HUGE_STACK_SIZE=0x200000
set(CMAKE_EXE_LINKER_FLAGS_INIT "-L/gpfs/bbp.cscs.ch/apps/bgas/tools/gcc/gcc-4.8.2/install/lib64 -latomic -lrt" CACHE STRING "BGAS flags")
set(CMAKE_C_FLAGS_INIT "-D__powerpc__ -I/gpfs/bbp.cscs.ch/home/biddisco/src/bgas/rdmahelper" CACHE STRING "BGAS flags")
# We do not perform cross compilation here ...
set(CMAKE_CROSSCOMPILING OFF)
# Set our platform name
set(HPX_PLATFORM "native")
# Disable generic coroutines (and use posix version)
set(HPX_WITH_GENERIC_CONTEXT_COROUTINES OFF CACHE BOOL "disable generic coroutines")
# BGAS nodes support ibverbs
set(HPX_WITH_PARCELPORT_IBVERBS ON CACHE BOOL "")
# Enable the tcp parcelport
set(HPX_WITH_PARCELPORT_TCP ON CACHE BOOL "")
# Enable the MPI parcelport
set(HPX_WITH_PARCELPORT_MPI ON CACHE BOOL "")
# We have a bunch of cores on the A2 processor ...
set(HPX_WITH_MAX_CPU_COUNT "64" CACHE STRING "")
# We have no custom malloc yet
if(NOT DEFINED HPX_WITH_MALLOC)
  set(HPX_WITH_MALLOC "system" CACHE STRING "")
endif()
set(HPX_HIDDEN_VISIBILITY OFF CACHE BOOL "")
#
# Convenience setup for jb @ bbpbg2.cscs.ch
#
set(BOOST_ROOT "/gpfs/bbp.cscs.ch/home/biddisco/apps/gcc-4.8.2/boost_1_56_0")
set(HWLOC_ROOT "/gpfs/bbp.cscs.ch/home/biddisco/apps/gcc-4.8.2/hwloc-1.8.1")
set(HPX_WITH_HWLOC ON CACHE BOOL "Use hwloc")
set(CMAKE_BUILD_TYPE "Debug" CACHE STRING "Default build")
#
# Testing flags
#
set(BUILD_TESTING                  ON  CACHE BOOL "Testing enabled by default")
set(HPX_WITH_TESTS                ON  CACHE BOOL "Testing enabled by default")
set(HPX_WITH_TESTS_BENCHMARKS     ON  CACHE BOOL "Testing enabled by default")
set(HPX_WITH_TESTS_REGRESSIONS    ON  CACHE BOOL "Testing enabled by default")
set(HPX_WITH_TESTS_UNIT           ON  CACHE BOOL "Testing enabled by default")
set(HPX_WITH_TESTS_EXTERNAL_BUILD OFF CACHE BOOL "Turn off build of cmake build tests")
set(DART_TESTING_TIMEOUT           45  CACHE STRING "Life is too short")
# HPX_WITH_STATIC_LINKING
BGQ
# Copyright (c) 2014 Thomas Heller
#
# Distributed under the Boost Software License, Version 1.0. (See accompanying
# file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#
# This is the default toolchain file to be used with CNK on a BlueGene/Q. It sets
# the appropriate compile flags and compiler such that HPX will compile.
# Note that you still need to provide Boost, hwloc and other utility libraries
# like a custom allocator yourself.
#
set(CMAKE_SYSTEM_NAME Linux)
# Set the Intel Compiler
set(CMAKE_CXX_COMPILER bgclang++11)
set(CMAKE_C_COMPILER bgclang)
#set(CMAKE_Fortran_COMPILER)
set(MPI_CXX_COMPILER mpiclang++11)
set(MPI_C_COMPILER mpiclang)
#set(MPI_Fortran_COMPILER)
set(CMAKE_C_FLAGS_INIT "" CACHE STRING "")
set(CMAKE_C_COMPILE_OBJECT "<CMAKE_C_COMPILER> -fPIC <DEFINES> <FLAGS> -o <OBJECT> -c <SOURCE>" CACHE STRING "")
set(CMAKE_C_LINK_EXECUTABLE "<CMAKE_C_COMPILER> -fPIC -dynamic <FLAGS> <CMAKE_C_LINK_FLAGS> <LINK_FLAGS> <OBJECTS> -o <TARGET> <LINK_LIBRARIES>" CACHE STRING "")
set(CMAKE_C_CREATE_SHARED_LIBRARY "<CMAKE_C_COMPILER> -fPIC -shared <CMAKE_SHARED_LIBRARY_CXX_FLAGS> <LANGUAGE_COMPILE_FLAGS> <LINK_FLAGS> <CMAKE_SHARED_LIBRARY_CREATE_CXX_FLAGS> <SONAME_FLAG><TARGET_SONAME> -o <TARGET> <OBJECTS> <LINK_LIBRARIES> " CACHE STRING "")
set(CMAKE_CXX_FLAGS_INIT "" CACHE STRING "")
set(CMAKE_CXX_COMPILE_OBJECT "<CMAKE_CXX_COMPILER> -fPIC <DEFINES> <FLAGS> -o <OBJECT> -c <SOURCE>" CACHE STRING "")
set(CMAKE_CXX_LINK_EXECUTABLE "<CMAKE_CXX_COMPILER> -fPIC -dynamic <FLAGS> <CMAKE_CXX_LINK_FLAGS> <LINK_FLAGS> <OBJECTS> -o <TARGET> <LINK_LIBRARIES>" CACHE STRING "")
set(CMAKE_CXX_CREATE_SHARED_LIBRARY "<CMAKE_CXX_COMPILER> -fPIC -shared <CMAKE_SHARED_LIBRARY_CXX_FLAGS> <LANGUAGE_COMPILE_FLAGS> <LINK_FLAGS> <CMAKE_SHARED_LIBRARY_CREATE_CXX_FLAGS> <SONAME_FLAG><TARGET_SONAME> -o <TARGET> <OBJECTS> <LINK_LIBRARIES>" CACHE STRING "")
set(CMAKE_Fortran_FLAGS_INIT "" CACHE STRING "")
set(CMAKE_Fortran_COMPILE_OBJECT "<CMAKE_Fortran_COMPILER> -fPIC <DEFINES> <FLAGS> -o <OBJECT> -c <SOURCE>" CACHE STRING "")
set(CMAKE_Fortran_LINK_EXECUTABLE "<CMAKE_Fortran_COMPILER> -fPIC -dynamic <FLAGS> <CMAKE_Fortran_LINK_FLAGS> <LINK_FLAGS> <OBJECTS> -o <TARGET> <LINK_LIBRARIES>")
set(CMAKE_Fortran_CREATE_SHARED_LIBRARY "<CMAKE_Fortran_COMPILER> -fPIC -shared <CMAKE_SHARED_LIBRARY_Fortran_FLAGS> <LANGUAGE_COMPILE_FLAGS> <LINK_FLAGS> <CMAKE_SHARED_LIBRARY_CREATE_Fortran_FLAGS> <SONAME_FLAG><TARGET_SONAME> -o <TARGET> <OBJECTS> <LINK_LIBRARIES> " CACHE STRING "")
# Disable searches in the default system paths. We are cross compiling after all
# and cmake might pick up wrong libraries that way
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM BOTH)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
# We do a cross compilation here ...
set(CMAKE_CROSSCOMPILING ON)
# Set our platform name
set(HPX_PLATFORM "BlueGeneQ")
# Always disable the ibverbs parcelport as it is nonfunctional on the BGQ.
set(HPX_WITH_PARCELPORT_IBVERBS OFF)
# Always disable the tcp parcelport as it is nonfunctional on the BGQ.
set(HPX_WITH_PARCELPORT_TCP OFF)
# Always enable the MPI parcelport as it is currently the only way to communicate on the BGQ.
set(HPX_WITH_PARCELPORT_MPI ON)
# We have a bunch of cores on the BGQ ...
set(HPX_WITH_MAX_CPU_COUNT "64")
# We default to the system allocator on the BGQ
if(NOT DEFINED HPX_WITH_MALLOC)
  set(HPX_WITH_MALLOC "system" CACHE STRING "")
endif()
Cray-Intel
# Copyright (c) 2014 Thomas Heller
#
# Distributed under the Boost Software License, Version 1.0. (See accompanying
# file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#
# This is the default toolchain file to be used on Cray systems with the Intel
# compiler. It sets the appropriate compile flags and compiler such that HPX will compile.
# Note that you still need to provide Boost, hwloc and other utility libraries
# like a custom allocator yourself.
#
#set(CMAKE_SYSTEM_NAME Cray-CNK-Intel)
if(HPX_WITH_STATIC_LINKING)
  set_property(GLOBAL PROPERTY TARGET_SUPPORTS_SHARED_LIBS FALSE)
endif()
# Set the Intel Compiler
set(CMAKE_CXX_COMPILER CC)
set(CMAKE_C_COMPILER cc)
set(CMAKE_Fortran_COMPILER ftn)
set(CMAKE_C_FLAGS_INIT "" CACHE STRING "")
set(CMAKE_SHARED_LIBRARY_C_FLAGS "-fPIC -shared" CACHE STRING "")
set(CMAKE_SHARED_LIBRARY_CREATE_C_FLAGS "-fPIC -shared" CACHE STRING "")
set(CMAKE_C_COMPILE_OBJECT "<CMAKE_C_COMPILER> -shared -fPIC <DEFINES> <FLAGS> -o <OBJECT> -c <SOURCE>" CACHE STRING "")
set(CMAKE_C_LINK_EXECUTABLE "<CMAKE_C_COMPILER> -fPIC -dynamic <FLAGS> <CMAKE_C_LINK_FLAGS> <LINK_FLAGS> <OBJECTS> -o <TARGET> <LINK_LIBRARIES>" CACHE STRING "")
set(CMAKE_C_CREATE_SHARED_LIBRARY "<CMAKE_C_COMPILER> -fPIC -shared <CMAKE_SHARED_LIBRARY_CXX_FLAGS> <LANGUAGE_COMPILE_FLAGS> <LINK_FLAGS> <CMAKE_SHARED_LIBRARY_CREATE_CXX_FLAGS> <SONAME_FLAG><TARGET_SONAME> -o <TARGET> <OBJECTS> <LINK_LIBRARIES> " CACHE STRING "")
set(CMAKE_CXX_FLAGS_INIT "" CACHE STRING "")
set(CMAKE_SHARED_LIBRARY_CXX_FLAGS "-fPIC -shared" CACHE STRING "")
set(CMAKE_SHARED_LIBRARY_CREATE_CXX_FLAGS "-fPIC -shared" CACHE STRING "")
set(CMAKE_CXX_COMPILE_OBJECT "<CMAKE_CXX_COMPILER> -shared -fPIC <DEFINES> <FLAGS> -o <OBJECT> -c <SOURCE>" CACHE STRING "")
set(CMAKE_CXX_LINK_EXECUTABLE "<CMAKE_CXX_COMPILER> -fPIC -dynamic <FLAGS> <CMAKE_CXX_LINK_FLAGS> <LINK_FLAGS> <OBJECTS> -o <TARGET> <LINK_LIBRARIES>" CACHE STRING "")
set(CMAKE_CXX_CREATE_SHARED_LIBRARY "<CMAKE_CXX_COMPILER> -fPIC -shared <CMAKE_SHARED_LIBRARY_CXX_FLAGS> <LANGUAGE_COMPILE_FLAGS> <LINK_FLAGS> <CMAKE_SHARED_LIBRARY_CREATE_CXX_FLAGS> <SONAME_FLAG><TARGET_SONAME> -o <TARGET> <OBJECTS> <LINK_LIBRARIES>" CACHE STRING "")
set(CMAKE_Fortran_FLAGS_INIT "" CACHE STRING "")
set(CMAKE_SHARED_LIBRARY_Fortran_FLAGS "-fPIC" CACHE STRING "")
set(CMAKE_SHARED_LIBRARY_CREATE_Fortran_FLAGS "-shared" CACHE STRING "")
set(CMAKE_Fortran_COMPILE_OBJECT "<CMAKE_Fortran_COMPILER> -shared -fPIC <DEFINES> <FLAGS> -o <OBJECT> -c <SOURCE>" CACHE STRING "")
set(CMAKE_Fortran_LINK_EXECUTABLE "<CMAKE_Fortran_COMPILER> -fPIC -dynamic <FLAGS> <CMAKE_Fortran_LINK_FLAGS> <LINK_FLAGS> <OBJECTS> -o <TARGET> <LINK_LIBRARIES>")
set(CMAKE_Fortran_CREATE_SHARED_LIBRARY "<CMAKE_Fortran_COMPILER> -fPIC -shared <CMAKE_SHARED_LIBRARY_Fortran_FLAGS> <LANGUAGE_COMPILE_FLAGS> <LINK_FLAGS> <CMAKE_SHARED_LIBRARY_CREATE_Fortran_FLAGS> <SONAME_FLAG><TARGET_SONAME> -o <TARGET> <OBJECTS> <LINK_LIBRARIES> " CACHE STRING "")
# Disable searches in the default system paths. We are cross compiling after all
# and cmake might pick up wrong libraries that way
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM BOTH)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
set(HPX_WITH_PARCELPORT_TCP OFF CACHE BOOL "")
set(HPX_WITH_PARCELPORT_MPI ON CACHE BOOL "")
set(HPX_WITH_PARCELPORT_MPI_MULTITHREADED OFF CACHE BOOL "")
# We default to the system allocator on the Cray
if(NOT DEFINED HPX_WITH_MALLOC)
  set(HPX_WITH_MALLOC "system" CACHE STRING "")
endif()
# We do a cross compilation here ...
set(CMAKE_CROSSCOMPILING ON CACHE BOOL "")
XeonPhi
# Copyright (c) 2014 Thomas Heller
#
# Distributed under the Boost Software License, Version 1.0. (See accompanying
# file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#
# This is the default toolchain file to be used with Intel Xeon PHIs. It sets
# the appropriate compile flags and compiler such that HPX will compile.
# Note that you still need to provide Boost, hwloc and other utility libraries
# like a custom allocator yourself.
#
set(CMAKE_SYSTEM_NAME Linux)
# Set the Intel Compiler
set(CMAKE_CXX_COMPILER icpc)
set(CMAKE_C_COMPILER icc)
set(CMAKE_Fortran_COMPILER ifort)
# Add the -mmic compile flag such that everything will be compiled for the correct
# platform
set(CMAKE_CXX_FLAGS_INIT "-mmic" CACHE STRING "Initial compiler flags used to compile for the Xeon Phi")
set(CMAKE_C_FLAGS_INIT "-mmic" CACHE STRING "Initial compiler flags used to compile for the Xeon Phi")
set(CMAKE_Fortran_FLAGS_INIT "-mmic" CACHE STRING "Initial compiler flags used to compile for the Xeon Phi")
# Disable searches in the default system paths. We are cross compiling after all
# and cmake might pick up wrong libraries that way
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM BOTH)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
# We do a cross compilation here ...
set(CMAKE_CROSSCOMPILING ON)
# Set our platform name
set(HPX_PLATFORM "XeonPhi")
# Always disable the ibverbs parcelport as it is nonfunctional on the Xeon Phi.
set(HPX_WITH_PARCELPORT_IBVERBS OFF CACHE BOOL "Enable the ibverbs based parcelport. This is currently an experimental feature")
# We have a bunch of cores on the MIC ... increase the default
set(HPX_WITH_MAX_CPU_COUNT "256" CACHE STRING "")
# We default to tbbmalloc as our allocator on the MIC
if(NOT DEFINED HPX_WITH_MALLOC)
  set(HPX_WITH_MALLOC "tbbmalloc" CACHE STRING "")
endif()
# Set the TBBMALLOC_PLATFORM correctly so that find_package(TBBMalloc) sets the
# right hints
set(TBBMALLOC_PLATFORM "mic" CACHE STRING "")
set(HPX_HIDDEN_VISIBILITY OFF CACHE BOOL "Use -fvisibility=hidden for builds on platforms which support it")
# RDTSC is available on Xeon/Phis
set(HPX_WITH_RDTSC ON CACHE BOOL "")
  • Create a build directory. HPX requires an out-of-tree build. This means you will be unable to run CMake in the HPX source tree.
cd hpx
mkdir my_hpx_build
cd my_hpx_build
  • Invoke CMake from your build directory, pointing the CMake driver to the root of your HPX source tree.
cmake -DBOOST_ROOT=/root/of/boost/installation \
      -DHWLOC_ROOT=/root/of/hwloc/installation \
      [other CMake variable definitions] \
      /path/to/source/tree

for instance:

cmake -DBOOST_ROOT=~/packages/boost -DHWLOC_ROOT=/packages/hwloc -DCMAKE_INSTALL_PREFIX=~/packages/hpx ~/downloads/hpx_0.9.10
  • Invoke GNU make. If you are on a machine with multiple cores, add the -jN flag to your make invocation, where N is the number of parallel build processes to use.
gmake -j4
[Caution]Caution

Compiling and linking HPX requires a considerable amount of memory. Make sure that approximately 2 GB of memory is available per parallel build process.

[Note]Note

On many Linux distributions, GNU make is installed simply as make; in that case, use make instead of gmake.

  • To complete the build and install HPX:
gmake install
[Important]Important

These commands will build and install the essential core components of HPX only. In order to build and run the tests, please invoke:

gmake tests

and in order to build (and install) all examples invoke:

cmake -DHPX_WITH_EXAMPLES=On .
gmake examples
gmake install

For more detailed information about using CMake please refer to its documentation and also the section Building HPX with CMake. Please pay special attention to the section about HPX_WITH_MALLOC as this is crucial for getting decent performance.
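As an illustration, the allocator can be selected at configure time by passing HPX_WITH_MALLOC on the CMake command line. The paths below are placeholders, and tcmalloc is just one possible choice; whichever allocator you name must be installed on your system:

```shell
# Sketch: configure HPX with tcmalloc as the allocator.
# All paths are placeholders; adjust them to your installation.
cmake -DBOOST_ROOT=~/packages/boost \
      -DHWLOC_ROOT=~/packages/hwloc \
      -DHPX_WITH_MALLOC=tcmalloc \
      /path/to/source/tree
```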

This section describes how to build HPX on OS X (Mac).

Build (and install) a recent version of Boost, using Clang and libc++
To build Boost with Clang and make it link against libc++ as the standard
library, you'll need to set up the following in your `~/user-config.jam`
file:
# user-config.jam (put this file into your home directory)
# ...

using clang
    :
    : "/usr/bin/clang++"
    : <cxxflags>"-std=c++11 -fcolor-diagnostics"
      <linkflags>"-stdlib=libc++ -L/path/to/libcxx/lib"
    ;

(Again, remember to replace /path/to with whatever you used earlier.)

You can then use this as your build command:

b2 --build-dir=/tmp/build-boost --layout=versioned toolset=clang install -j4

We verified this using Boost V1.53. If you use a different version, just remember to replace /usr/local/include/boost-1_53 with whatever include prefix you had in your installation.

Build HPX, finally
cd /path/to
git clone https://github.com/STEllAR-GROUP/hpx.git
mkdir build-hpx && cd build-hpx

To build with Clang 3.2, execute:

cmake ../hpx \
    -DCMAKE_CXX_COMPILER=clang++ \
    -DBOOST_INCLUDE_DIR=/usr/local/include/boost-1_53 \
    -DBOOST_LIBRARY_DIR=/usr/local/lib \
    -DBOOST_SUFFIX=-clang-darwin32-mt-1_53
make

To build with Clang 3.3 (trunk), execute:

cmake ../hpx \
    -DCMAKE_CXX_COMPILER=clang++ \
    -DBOOST_INCLUDE_DIR=/usr/local/include/boost-1_53 \
    -DBOOST_LIBRARY_DIR=/usr/local/lib \
    -DBOOST_SUFFIX=-clang-darwin33-mt-1_53
make

For more detailed information about using CMake please refer to its documentation and to the section Building HPX with CMake.

Alternative Installation method of HPX on OS X (Mac)

Alternatively, you can install a recent version of gcc as well as all required libraries via MacPorts:

  1. Install MacPorts
  2. Install CMake, gcc 4.8, and hwloc:

    sudo port install gcc48
    sudo port install hwloc
    

    You may also want:

    sudo port install cmake
    sudo port install git-core
    
  3. Make this version of gcc your default compiler:

    sudo port install gcc_select
    sudo port select gcc mp-gcc48
    
  4. Build Boost manually (the Boost package of MacPorts is built with Clang, and unfortunately doesn't work with a GCC-built version of HPX):

    wget http://sourceforge.net/projects/boost/files/boost/1.54.0/boost_1_54_0.tar.bz2
    tar xjf boost_1_54_0.tar.bz2
    pushd boost_1_54_0
    export BOOST_ROOT=$HOME/boost_1_54_0
    ./bootstrap.sh --prefix=$BOOST_ROOT
    ./b2 -j8
    ./b2 -j8 install
    export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:$BOOST_ROOT/lib
    popd
    
  5. Build HPX:

    git clone https://github.com/STEllAR-GROUP/hpx.git
    mkdir hpx-build
    pushd hpx-build
    export HPX_ROOT=$HOME/hpx
    cmake -DCMAKE_C_COMPILER=gcc \
        -DCMAKE_CXX_COMPILER=g++ \
        -DCMAKE_Fortran_COMPILER=gfortran \
        -DCMAKE_C_FLAGS="-Wno-unused-local-typedefs" \
        -DCMAKE_CXX_FLAGS="-Wno-unused-local-typedefs" \
        -DBOOST_ROOT=$BOOST_ROOT \
        -DHWLOC_ROOT=/opt/local \
        -DCMAKE_INSTALL_PREFIX=$HOME/hpx \
             $(pwd)/../hpx
    make -j8
    make -j8 install
    export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:$HPX_ROOT/lib/hpx
    popd
    
  6. Note that you need to set BOOST_ROOT, HPX_ROOT, and DYLD_LIBRARY_PATH (for both BOOST_ROOT and HPX_ROOT) every time you configure, build, or run an HPX application.
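Since these variables need to be set in every shell session, it can be convenient to collect the exports in a small script and source it before working with HPX. This is only a sketch (the script name is hypothetical) using the paths from the steps above; adjust them to your installation:

```shell
# hpx-env.sh -- convenience script (hypothetical); load it with: . ./hpx-env.sh
export BOOST_ROOT=$HOME/boost_1_54_0
export HPX_ROOT=$HOME/hpx
# OS X uses DYLD_LIBRARY_PATH as the dynamic linker search path
export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:$BOOST_ROOT/lib:$HPX_ROOT/lib/hpx
```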
  7. If you want to use HPX with MPI, you need to enable the MPI parcelport, and also specify the location of the MPI wrapper scripts. This can be done e.g. with the following command:
cmake -DHPX_WITH_PARCELPORT_MPI=ON \
     -DCMAKE_C_COMPILER=gcc \
     -DCMAKE_CXX_COMPILER=g++ \
     -DCMAKE_Fortran_COMPILER=gfortran \
     -DMPI_C_COMPILER=openmpicc \
     -DMPI_CXX_COMPILER=openmpic++ \
     -DMPI_Fortran_COMPILER=openmpif90 \
     -DCMAKE_C_FLAGS="-Wno-unused-local-typedefs" \
     -DCMAKE_CXX_FLAGS="-Wno-unused-local-typedefs" \
     -DBOOST_ROOT=$BOOST_ROOT \
     -DHWLOC_ROOT=/opt/local \
     -DCMAKE_INSTALL_PREFIX=$HOME/hpx \
         $(pwd)/../hpx
Installation of Required Prerequisites
Installation of the HPX Library
  • Create a build folder. HPX requires an out-of-tree build. This means that you will be unable to run CMake in the HPX source folder.
  • Open up the CMake GUI. In the input box labelled "Where is the source code:", enter the full path to the source folder. The source directory is one where the sources were checked out. CMakeLists.txt files in the source directory as well as the subdirectories describe the build to CMake. In addition to this, there are CMake scripts (usually ending in .cmake) stored in a special CMake directory. CMake does not alter any file in the source directory and doesn't add new ones either. In the input box labelled "Where to build the binaries:", enter the full path to the build folder you created before. The build directory is one where all compiler outputs are stored, which includes object files and final executables.
  • Add CMake variable definitions (if any) by clicking the "Add Entry" button. There are two required variables you need to define: BOOST_ROOT and HWLOC_ROOT. These (PATH) variables need to be set to point to the root folder of your Boost and Portable Hardware Locality (HWLOC) installations. It is recommended to set the variable CMAKE_INSTALL_PREFIX as well. This determines where the HPX libraries will be built and installed. If this (PATH) variable is set, it has to refer to the directory where the built HPX files should be installed to.
  • Press the "Configure" button. A window will pop up asking you which compilers to use. Select the Visual Studio 10 (64Bit) compiler (it usually is the default if available). The Visual Studio 2012 (64Bit) and Visual Studio 2013 (64Bit) compilers are supported as well. Note that while it is possible to build HPX for x86, we don't recommend doing so as 32 bit runs are severely restricted by a 32 bit Windows system limitation affecting the number of HPX threads you can create.
  • Press "Configure" again. Repeat this step until the "Generate" button becomes clickable (and until no variable definitions are marked red anymore).
  • Press "Generate".
  • Open up the build folder, and double-click hpx.sln.
  • Build the INSTALL target.

For more detailed information about using CMake please refer to its documentation and also the section Building HPX with CMake.
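If you prefer the command line over the CMake GUI, an equivalent configuration can be generated from a Visual Studio command prompt. The generator name and all paths below are examples only and depend on your Visual Studio version and install locations:

```shell
REM Sketch of a command-line configure on Windows (paths are placeholders)
cmake -G "Visual Studio 10 Win64" ^
      -DBOOST_ROOT=C:\packages\boost ^
      -DHWLOC_ROOT=C:\packages\hwloc ^
      -DCMAKE_INSTALL_PREFIX=C:\bin\hpx ^
      C:\path\to\hpx
```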

  • Download the CMake V3.4.3 installer (or latest version) from here
  • Download the Portable Hardware Locality (HWLOC) V1.11.0 (or latest version) from here and unpack it.
  • Download Boost libraries V1.60 (or latest version) from here and unpack them.
  • Build the boost DLLs and LIBs by using these commands from Command Line (or PowerShell). Open CMD/PowerShell inside the Boost dir and type in:

    bootstrap.bat
    

    This batch file will set up everything needed to create a successful build. Now execute:

    b2.exe link=shared variant=release,debug architecture=x86 address-model=64 threading=multi --build-type=complete install
    

    This command will start a (very long) build of all available Boost libraries. Please, be patient.

  • Open CMake-GUI.exe and set up your source directory (input field 'Where is the source code') to the base directory of the source code you downloaded from HPX' GitHub pages. Here's an example of my CMake path settings which point to my Documents/GitHub/hpx folder:

    Figure 3. Example CMake path settings

    Example CMake path settings

    Inside 'Where is the source code', enter the base directory of your HPX source directory (do not enter the "src" sub-directory!). Inside 'Where to build the binaries', enter the path where the entire build process will happen. This is important because CMake performs an "out-of-tree" build: it does not touch or change the original source files in any way. Instead, it generates Visual Studio solution files which build HPX out of the HPX source tree.

  • Set four new variables (these are CMake variables, not Windows environment variables): BOOST_ROOT, HWLOC_ROOT, CMAKE_INSTALL_PREFIX and HPX_WITH_BOOST_ALL_DYNAMIC_LINK

    The meaning of these variables is as follows:

    • BOOST_ROOT: the root directory of the unpacked Boost headers/cpp files.
    • HWLOC_ROOT: the root directory of the unpacked Portable Hardware Locality files.
    • CMAKE_INSTALL_PREFIX: the "root directory" where the future builds of HPX should be installed to.

    [Note]Note

    HPX is a BIG software collection and I really don't recommend using the default C:\Program Files\hpx. I prefer simpler paths without white space, like C:\bin\hpx or D:\bin\hpx etc.

    To insert a new variable, click on "Add Entry", enter the name into the "Name" field, select PATH as the type, and put the path into the "Path" text field. Repeat this for the first three variables.

    The last one, HPX_WITH_BOOST_ALL_DYNAMIC_LINK, is a BOOL and must be checked (there will be a checkbox instead of a text field).

    This is what variable insertion looks like:

    Figure 4. Example CMake Adding Entry

    Example CMake Adding Entry

    Alternatively, you could provide BOOST_LIBRARYDIR instead of BOOST_ROOT; the difference is that BOOST_LIBRARYDIR should point to the subdirectory inside the Boost root where all the compiled DLLs/LIBs are, for example the bin.v2 subdirectory. It is important to keep the meanings of these two variables separate: BOOST_ROOT points to the root folder of the Boost library, while BOOST_LIBRARYDIR points to the subdirectory inside the Boost root folder where the compiled binaries are.

  • Click the 'Configure' button of CMake-GUI. You will immediately be presented with a small window where you can select the C++ compiler to be used within Visual Studio. In my case I have used the latest v14 (a.k.a. C++ 2015), but older versions should be sufficient too. Make sure to select the 64Bit compiler.
  • After the configure process has finished successfully, click the 'Generate' button. Now CMake will put new VS solution files into the BUILD folder you selected at the beginning.
  • Open Visual Studio and load the HPX.sln from your build folder.
  • Go to CMakePredefinedTargets and build the INSTALL project:

    Figure 5. Visual Studio INSTALL Target

    Visual Studio INSTALL Target

    It will take some time to compile everything and in the end you should see an output similar to this one:

    Figure 6. Visual Studio Build Output

    Visual Studio Build Output

So far we only support BGClang for compiling HPX on the BlueGene/Q.

  • Check if BGClang is available on your installation. If not, obtain and install a copy from the BGClang trac page (https://trac.alcf.anl.gov/projects/llvm-bgq).
  • Build (and install) a recent version of hwloc with the following commands:
./configure \
  --host=powerpc64-bgq-linux \
  --prefix=$HOME/install/hwloc \
  --disable-shared \
  --enable-static \
  CPPFLAGS='-I/bgsys/drivers/ppcfloor -I/bgsys/drivers/ppcfloor/spi/include/kernel/cnk/'
make
make install
  • Build (and install) a recent version of Boost, using BGClang. To build Boost with BGClang, you'll need to set up the following in your Boost ~/user-config.jam file:
# user-config.jam (put this file into your home directory)
using clang
  :
  : bgclang++11
  :
  ;

You can then use this as your build command:

./bootstrap.sh
./b2 --build-dir=/tmp/build-boost --layout=versioned toolset=clang -j12
  • Clone the master HPX git repository (or a stable tag):
git clone git://github.com/STEllAR-GROUP/hpx.git
  • Generate the HPX buildfiles using cmake:
cmake -DHPX_PLATFORM=BlueGeneQ \
        -DCMAKE_TOOLCHAIN_FILE=/path/to/hpx/cmake/toolchains/BGQ.cmake \
        -DCMAKE_CXX_COMPILER=bgclang++11 \
        -DMPI_CXX_COMPILER=mpiclang++11 \
        -DHWLOC_ROOT=/path/to/hwloc/installation \
        -DBOOST_ROOT=/path/to/boost \
        -DHPX_WITH_MALLOC=system \
        /path/to/hpx
  • To complete the build and install HPX:
make -j24
make install
This will build and install the essential core components of HPX only. Use:
make -j24 examples
make -j24 install
to build and install the examples.
Installation of the Boost Libraries
  • Download Boost Downloads for Linux and unpack the retrieved tarball.
  • Adapt your ~/user-config.jam to contain the following lines:

    ## Toolset to be used for compiling for the host
    using intel
        : host
        :
        : <cxxflags>"-std=c++0x"
        ;
    
    ## Toolset to be used for compiling for the Xeon Phi
    using intel
        : mic
        :
        : <cxxflags>"-std=c++0x -mmic"
          <linkflags>"-std=c++0x -mmic"
        ;
    
  • Change to the directory you unpacked boost in (from now on referred to as $BOOST_ROOT) and execute the following commands:

    ./bootstrap.sh
    ./b2 toolset=intel-mic -j<N>
    

    You should now have all the required boost libraries.

Installation of the hwloc Library
  • Download hwloc, unpack the retrieved tarball and change to the newly created directory
  • Run the configure-make-install procedure as follows
CC=icc CFLAGS=-mmic CXX=icpc CXXFLAGS=-mmic LDFLAGS=-mmic ./configure --host=x86_64-k1om-linux --prefix=$HWLOC_ROOT
make
make install
[Important]Important

The minimally required version of the Portable Hardware Locality (HWLOC) library on the Intel Xeon Phi is V1.6.

You now have a working hwloc installation in $HWLOC_ROOT.

Building HPX

After all the prerequisites have been successfully installed, we can now start building and installing HPX. The build procedure is almost the same as for How to Install HPX on Unix Variants, with the sole difference that you have to enable the Xeon Phi in the CMake build system. This is achieved by invoking CMake in the following way:

cmake                                             \
    -DCMAKE_TOOLCHAIN_FILE=/path/to/hpx/cmake/toolchains/XeonPhi.cmake \
    -DBOOST_ROOT=$BOOST_ROOT                      \
    -DHWLOC_ROOT=$HWLOC_ROOT                      \
    /path/to/hpx

For more detailed information about using CMake please refer to its documentation and to the section Building HPX with CMake. Please pay special attention to the section about HPX_WITH_MALLOC as this is crucial for getting decent performance on the Xeon Phi.

The documentation for HPX is generated by the Boost QuickBook documentation toolchain. Setting up this toolchain requires installing several tools and libraries. Generating the documentation is possible only if all of them are configured correctly.

CMake Variables needed for the Documentation Toolchain

DOXYGEN_ROOT:PATH

Specifies where to look for the installation of the Doxygen tool.

BOOSTQUICKBOOK_ROOT:PATH

Specifies where to look for the installation of the QuickBook tool. This tool usually needs to be built by hand. See the QuickBook documentation for more details on how to do this.

BOOSTAUTOINDEX_ROOT:PATH

Specifies where to look for the installation of the AutoIndex tool. This tool usually needs to be built by hand. See the AutoIndex documentation for more details on how to do this. The documentation can still be generated even if the AutoIndex tool cannot be found.

XSLTPROC_ROOT:PATH

Specifies where to look for the installation of the libxslt package (and the xsltproc tool). Consult the documentation for your platform on how to make this package available on your machine.

DOCBOOK_DTD_ROOT:PATH

Specifies where to look for the installation of the docbook-xml-4.2 package. This usually needs to refer to the directory containing the file docbook.cat, which is part of this package.

DOCBOOK_XSL_ROOT:PATH

Specifies where to look for the installation of the docbook-xsl package. This usually needs to refer to the directory containing the file catalog.xml, which is part of this package.
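Assuming all of the tools above are installed, a configure line wiring these variables together might look like the following sketch. Every path is a placeholder for your local installation, and HPX_WITH_DOCUMENTATION is assumed to be the switch that enables the documentation target:

```shell
# Sketch: configure HPX with the documentation toolchain (paths are placeholders)
cmake -DHPX_WITH_DOCUMENTATION=On \
      -DDOXYGEN_ROOT=/usr \
      -DBOOSTQUICKBOOK_ROOT=$HOME/tools/quickbook \
      -DBOOSTAUTOINDEX_ROOT=$HOME/tools/auto_index \
      -DXSLTPROC_ROOT=/usr \
      -DDOCBOOK_DTD_ROOT=/usr/share/xml/docbook/schema/dtd/4.2 \
      -DDOCBOOK_XSL_ROOT=/usr/share/xml/docbook/stylesheet/docbook-xsl \
      /path/to/hpx
```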

After you are done installing HPX, you should be able to build the following program. It prints Hello World! on the locality you run it on.

// Including 'hpx/hpx_main.hpp' instead of the usual 'hpx/hpx_init.hpp' enables
// to use the plain C-main below as the direct main HPX entry point.
#include <hpx/hpx_main.hpp>
#include <hpx/include/iostreams.hpp>

int main()
{
    // Say hello to the world!
    hpx::cout << "Hello World!\n" << hpx::flush;
    return 0;
}

Copy the text of this program into a file called hello_world.cpp.

Now, in the directory where you put hello_world.cpp, issue the following commands (where $HPX_LOCATION is the build directory or CMAKE_INSTALL_PREFIX you used while building HPX):

export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:$HPX_LOCATION/lib/pkgconfig
c++ -o hello_world hello_world.cpp \
    `pkg-config --cflags --libs hpx_application` \
    -lhpx_iostreams -DHPX_APPLICATION_NAME=hello_world
[Important]Important

When using pkg-config with HPX, the pkg-config flags must go after the -o flag.

[Note]Note

HPX libraries have different names in debug and release mode. If you want to link against a debug HPX library, you need to use the _debug suffix for the pkg-config name. That means instead of hpx_application or hpx_component you will have to use hpx_application_debug or hpx_component_debug. Moreover, all referenced HPX components need to have an appended 'd' suffix, e.g. instead of -lhpx_iostreams you will need to specify -lhpx_iostreamsd.
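Following this naming scheme, a debug variant of the earlier build line would look like this sketch (it assumes a debug build of HPX is installed under $HPX_LOCATION):

```shell
# Sketch: link hello_world against the debug HPX libraries
export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:$HPX_LOCATION/lib/pkgconfig
c++ -o hello_world hello_world.cpp \
    `pkg-config --cflags --libs hpx_application_debug` \
    -lhpx_iostreamsd -DHPX_APPLICATION_NAME=hello_world
```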

[Important]Important

If the HPX libraries are in a path that is not found by the dynamic linker, you need to add the path $HPX_LOCATION/lib to your linker search path (for example LD_LIBRARY_PATH on Linux).
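On Linux, for example, the search path can be extended like this (the value of HPX_LOCATION below is a placeholder for your build or install prefix):

```shell
# Make the dynamic linker find the HPX libraries (Linux)
HPX_LOCATION=${HPX_LOCATION:-$HOME/packages/hpx}
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HPX_LOCATION/lib
```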

To test the program, type:

./hello_world

which should print Hello World! and exit.

Let's try a more complex example involving an HPX component. An HPX component is a class which exposes HPX actions. HPX components are compiled into dynamically loaded modules called component libraries. Here's the source code:

hello_world_component.cpp

#include "hello_world_component.hpp"
#include <hpx/include/iostreams.hpp>

#include <iostream>

namespace examples { namespace server
{
    void hello_world::invoke()
    {
        hpx::cout << "Hello HPX World!" << std::endl;
    }
}}

HPX_REGISTER_COMPONENT_MODULE();

typedef hpx::components::component<
    examples::server::hello_world
> hello_world_type;

HPX_REGISTER_COMPONENT(hello_world_type, hello_world);

HPX_REGISTER_ACTION(
    examples::server::hello_world::invoke_action, hello_world_invoke_action);

hello_world_component.hpp

#if !defined(HELLO_WORLD_COMPONENT_HPP)
#define HELLO_WORLD_COMPONENT_HPP

#include <hpx/hpx_fwd.hpp>
#include <hpx/include/actions.hpp>
#include <hpx/include/lcos.hpp>
#include <hpx/include/components.hpp>
#include <hpx/include/serialization.hpp>

namespace examples { namespace server
{
    struct HPX_COMPONENT_EXPORT hello_world
        : hpx::components::component_base<hello_world>
    {
        void invoke();
        HPX_DEFINE_COMPONENT_ACTION(hello_world, invoke);
    };
}}

HPX_REGISTER_ACTION_DECLARATION(
    examples::server::hello_world::invoke_action, hello_world_invoke_action);

namespace examples
{
    struct hello_world
      : hpx::components::client_base<hello_world, server::hello_world>
    {
        typedef hpx::components::client_base<hello_world, server::hello_world>
            base_type;

        hello_world(hpx::future<hpx::id_type> f)
          : base_type(std::move(f))
        {}

        void invoke()
        {
            hpx::async<server::hello_world::invoke_action>(this->get_id()).get();
        }
    };
}

#endif // HELLO_WORLD_COMPONENT_HPP

hello_world_client.cpp

#include "hello_world_component.hpp"
#include <hpx/hpx_init.hpp>

int hpx_main(boost::program_options::variables_map&)
{
    {
        // Create a single instance of the component on this locality.
        examples::hello_world client =
            hpx::new_<examples::hello_world>(hpx::find_here());

        // Invoke the component's action, which will print "Hello World!".
        client.invoke();
    }

    return hpx::finalize(); // Initiate shutdown of the runtime system.
}

int main(int argc, char* argv[])
{
    return hpx::init(argc, argv); // Initialize and run HPX.
}

Copy the three source files above into three files (called hello_world_component.cpp, hello_world_component.hpp and hello_world_client.cpp respectively).

Now, in the directory where you put the files, run the following command to build the component library (where $HPX_LOCATION is the build directory or CMAKE_INSTALL_PREFIX you used while building HPX):

export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:$HPX_LOCATION/lib/pkgconfig
c++ -o libhpx_hello_world.so hello_world_component.cpp \
    `pkg-config --cflags --libs hpx_component` \
    -lhpx_iostreams -DHPX_COMPONENT_NAME=hpx_hello_world

Now pick a directory in which to install your HPX component libraries. For this example, we'll choose a directory named my_hpx_libs.

mkdir ~/my_hpx_libs
mv libhpx_hello_world.so ~/my_hpx_libs
[Note]Note

HPX libraries have different names in debug and release mode. If you want to link against a debug HPX library, you need to use the _debug suffix for the pkg-config name. That means instead of hpx_application or hpx_component you will have to use hpx_application_debug or hpx_component_debug. Moreover, all referenced HPX components need to have an appended 'd' suffix, e.g. instead of -lhpx_iostreams you will need to specify -lhpx_iostreamsd.

[Important]Important

If the HPX libraries are in a path that is not found by the dynamic linker, you need to add the path $HPX_LOCATION/lib to your linker search path (for example, to LD_LIBRARY_PATH on Linux).

Now, to build the application that uses this component (hello_world_client.cpp), we do:

export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:$HPX_LOCATION/lib/pkgconfig
c++ -o hello_world_client hello_world_client.cpp \
    `pkg-config --cflags --libs hpx_application` \
    -L${HOME}/my_hpx_libs -lhpx_hello_world -lhpx_iostreams
[Important]Important

When using pkg-config with HPX, the pkg-config flags must go after the -o flag.

Finally, you'll need to set your LD_LIBRARY_PATH before you can run the program. To run the program, type:

export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$HOME/my_hpx_libs"
./hello_world_client

which should print Hello HPX World! and exit.

In addition to the pkg-config support discussed on the previous pages, HPX comes with full CMake support. In order to integrate HPX into your existing or new CMakeLists.txt, you can leverage the find_package command integrated into CMake. The following is a Hello World component example using CMake.

Let's revisit what we have. We have three files which compose our example application:

  • hello_world_component.hpp
  • hello_world_component.cpp
  • hello_world_client.cpp

The basic structure to include HPX into your CMakeLists.txt is shown here:

# Require a recent version of cmake
cmake_minimum_required(VERSION 2.8.4 FATAL_ERROR)

# This project is C++ based.
project(your_app CXX)

# Instruct cmake to find the HPX settings
find_package(HPX)

In order to have CMake find HPX, it needs to be told where to look for the HPXConfig.cmake file that is generated when HPX is built or installed. This file is used by find_package(HPX) to set up all the necessary macros needed to use HPX in your project. There are several ways to achieve this:

  • set the HPX_DIR cmake variable to point to the directory containing the HPXConfig.cmake script on the command line when you invoke cmake,

    cmake -DHPX_DIR=$HPX_LOCATION/lib/cmake/HPX ...
    

    where $HPX_LOCATION is the build directory or CMAKE_INSTALL_PREFIX you used when building/configuring HPX

  • set the CMAKE_PREFIX_PATH variable to the root directory of your HPX build or install location on the command line when you invoke cmake,

    cmake -DCMAKE_PREFIX_PATH=$HPX_LOCATION ...
    

    the difference between CMAKE_PREFIX_PATH and HPX_DIR is that cmake will append common suffixes such as lib/cmake/<project> to the CMAKE_PREFIX_PATH and search in these locations too. Note that if your project uses HPX as well as other cmake managed projects, the paths to the locations of these multiple projects may be concatenated in the CMAKE_PREFIX_PATH.

  • The variables above may be set in the CMake GUI or curses ccmake interface instead of the command line.

Additionally, if you wish to require HPX for your project, replace the find_package(HPX) line with find_package(HPX REQUIRED).

You can check if HPX was successfully found with the HPX_FOUND CMake variable.
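For example, a configuration that should stop with a readable message when HPX is missing could check this variable right after the find_package call (the message texts are illustrative; find_package(HPX REQUIRED) achieves the same effect):

# Instruct cmake to find the HPX settings
find_package(HPX)

if(HPX_FOUND)
  message(STATUS "HPX was found in ${HPX_DIR}")
else()
  message(FATAL_ERROR "HPX was not found, set HPX_DIR or CMAKE_PREFIX_PATH")
endif()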

The simplest way to add the HPX component is to use the add_hpx_component macro and add it to the CMakeLists.txt file:

# build your application using HPX
add_hpx_component(hello_world_component
    SOURCES hello_world_component.cpp
    HEADERS hello_world_component.hpp
    COMPONENT_DEPENDENCIES iostreams)

The available options to add_hpx_component are:

  • SOURCES: The source files for that component
  • HEADERS: The header files for that component
  • DEPENDENCIES: Other libraries or targets this component depends on
  • COMPONENT_DEPENDENCIES: The components this component depends on
  • PLUGIN: Treat this component as a plugin-able library
  • COMPILE_FLAGS: Additional compiler flags
  • LINK_FLAGS: Additional linker flags
  • FOLDER: Add the headers and source files to this Source Group folder
  • EXCLUDE_FROM_ALL: Do not build this component as part of the all target

After adding the component, the way you add the executable is as follows:

# build your application using HPX
add_hpx_executable(hello_world
    ESSENTIAL
    SOURCES hello_world_client.cpp
    COMPONENT_DEPENDENCIES hello_world_component)

When you configure your application, all you need to do is set the HPX_DIR variable to point to the installation of HPX!

[Note]Note

All library targets built with HPX are exported and readily available to be used as arguments to target_link_libraries in your targets. The HPX include directories are available through the HPX_INCLUDE_DIRS CMake variable.

CMake macros to integrate HPX into existing Applications

In addition to add_hpx_component and add_hpx_executable, you can use the hpx_setup_target macro to set up an already existing target to be used with the HPX libraries.

hpx_setup_target(target)

Optional Parameters are:

  • EXPORT: Adds it to the CMake export list HPXTargets
  • INSTALL: Generates an install rule for the target
  • PLUGIN: Treat this component as a plugin-able library
  • TYPE: The type can be: EXECUTABLE, LIBRARY or COMPONENT
  • DEPENDENCIES: Other libraries or targets this component depends on
  • COMPONENT_DEPENDENCIES: The components this component depends on
  • COMPILE_FLAGS: Additional compiler flags
  • LINK_FLAGS: Additional linker flags
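For instance, an executable target created with plain CMake commands can be handed to HPX afterwards (a sketch; the target name my_app and its source file are made up for illustration):

add_executable(my_app main.cpp)
hpx_setup_target(my_app
    TYPE EXECUTABLE
    COMPONENT_DEPENDENCIES iostreams)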

If you do not use CMake, you can still build against HPX but you should refer to the section on How to Build HPX Components with pkg-config.

[Note]Note

Since HPX relies on dynamic libraries, the dynamic linker needs to know where to look for them. If HPX isn't installed into a path which is configured as a linker search path, external projects need to either set RPATH or adapt LD_LIBRARY_PATH to point to where the HPX libraries reside. In order to set RPATHs, you can include HPX_SetFullRPATH in your project after all libraries you want to link against have been added. Please also consult the CMake documentation.

To ensure the correctness of HPX, we ship a large variety of unit and regression tests. The tests are driven by the CTest tool and are executed automatically by buildbot (see HPX Buildbot Website) on each commit to the HPX Github repository. In addition, we encourage you to run the test suite manually to ensure proper operation on your target system. If a test fails for your platform, we highly recommend submitting an issue on our HPX Issues tracker with detailed information about the target system.

Running the tests manually is as easy as typing:

make tests

This will build all tests and run them once they have been built successfully. After the tests have been built, you can invoke separate tests with the help of the

ctest

command. Please see the CTest Documentation for further details.
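For example, to run only the tests whose names match a given pattern, with the output of failing tests echoed to the terminal (an illustrative invocation; the pattern depends on your build):

ctest --output-on-failure -R unit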

If you stumble over a bug or missing feature in HPX, please submit an issue to our HPX Issues tracker. For more information on how to submit support requests or other means of getting in contact with the developers, please see the Support Website.

In addition to manual testing, we run automated tests on various platforms. You can see the status of the current master head by visiting the HPX Buildbot Website.

All HPX applications can be configured using special command line options and/or using special configuration files. This section describes the available options, the configuration file format, and the algorithm used to locate possible predefined configuration files. Additionally this section describes the defaults assumed if no external configuration information is supplied.

During startup any HPX application applies a predefined search pattern to locate one or more configuration files. All found files will be read and merged in the sequence they are found into one single internal database holding all configuration properties. This database is used during the execution of the application to configure different aspects of the runtime system.

In addition to the ini files, any application can supply its own configuration files, which will be merged with the configuration database as well. Moreover, the user can specify additional configuration parameters on the command line when executing an application. The HPX runtime system will merge all command line configuration options (see the description of the --hpx:ini, --hpx:config, and --hpx:app-config command line options).

All HPX applications can be configured using a special file format which is similar to the well-known Windows INI file format. This is a structured text format that allows grouping key/value pairs (properties) into sections. The basic element contained in an ini file is the property. Every property has a name and a value, delimited by an equals sign ('='). The name appears to the left of the equals sign:

name=value

The value may contain equal signs as only the first '=' character is interpreted as the delimiter between name and value. Whitespace before the name, after the value and immediately before and after the delimiting equal sign is ignored. Whitespace inside the value is retained.

Properties may be grouped into arbitrarily named sections. The section name appears on a line by itself, in square brackets ([ and ]). All properties after the section declaration are associated with that section. There is no explicit "end of section" delimiter; sections end at the next section declaration, or at the end of the file:

[section]

In HPX sections can be nested. A nested section has a name composed of all section names it is embedded in. The section names are concatenated using a dot ('.'):

[outer_section.inner_section]

Here inner_section is logically nested within outer_section.

It is possible to use the full section name concatenated with the property name to refer to a particular property. For example in:

[a.b.c]
d = e

the property value of d can be referred to as a.b.c.d=e.

In HPX ini files can contain comments. Hash signs ('#') at the beginning of a line indicate a comment. All characters starting with the '#' until the end of line are ignored.

If a property with the same name is reused inside a section, the second occurrence of this property name will override the first occurrence (discard the first value). Duplicate sections simply merge their properties together, as if they occurred contiguously.

In HPX ini files, a property value ${FOO:default} will use the environment variable FOO to extract the actual value if it is set, and default otherwise. No default has to be specified; ${FOO} simply refers to the environment variable FOO. If FOO is not set or empty, the overall expression will evaluate to an empty string. A property value $[section.key:default] refers to the value held by the property section.key if it exists, and default otherwise. No default has to be specified; $[section.key] refers to the property section.key. If the property section.key is not set or empty, the overall expression will evaluate to an empty string.
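As an illustration, the following made-up section combines both expansion mechanisms:

[my_app]
# uses the environment variable MY_APP_THREADS if set, 2 otherwise
threads = ${MY_APP_THREADS:2}
# refers to the property my_app.threads defined above
worker_threads = $[my_app.threads]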

[Note]Note

Any property $[section.key:default] is evaluated whenever it is queried and not when the configuration data is initialized. This allows for lazy evaluation and relaxes initialization order of different sections. The only exception are recursive property values, e.g. values referring to the very key they are associated with. Those property values are evaluated at initialization time to avoid infinite recursion.

During startup any HPX application applies a predefined search pattern to locate one or more configuration files. All found files will be read and merged in the sequence they are found into one single internal data structure holding all configuration properties.

As a first step the internal configuration database is filled with a set of default configuration properties. Those settings are described on a section by section basis below.

[Note]Note

You can print the default configuration settings used for an executable by specifying the command line option --hpx:dump-config.

The system Configuration Section

[system]
pid = <process-id>
prefix = <current prefix path of core HPX library>
executable_prefix = <current prefix path of executable>

Property

Description

system.pid

This is initialized to store the current OS-process id of the application instance.

system.prefix

This is initialized to the base directory HPX has been loaded from.

system.executable_prefix

This is initialized to the base directory the current executable has been loaded from.

The hpx Configuration Section

[hpx]
location = ${HPX_LOCATION:$[system.prefix]}
component_path = $[hpx.location]/lib/hpx:$[system.executable_prefix]/lib/hpx:$[system.executable_prefix]/../lib/hpx
master_ini_path = $[hpx.location]/share/hpx-<version>:$[system.executable_prefix]/share/hpx-<version>:$[system.executable_prefix]/../share/hpx-<version>
ini_path = $[hpx.master_ini_path]/ini
os_threads = 1
localities = 1
program_name =
cmd_line =
lock_detection = ${HPX_LOCK_DETECTION:0}
throw_on_held_lock = ${HPX_THROW_ON_HELD_LOCK:1}
minimal_deadlock_detection = <debug>

[hpx.stacks]
small_size = ${HPX_SMALL_STACK_SIZE:<hpx_small_stack_size>}
medium_size = ${HPX_MEDIUM_STACK_SIZE:<hpx_medium_stack_size>}
large_size = ${HPX_LARGE_STACK_SIZE:<hpx_large_stack_size>}
huge_size = ${HPX_HUGE_STACK_SIZE:<hpx_huge_stack_size>}
use_guard_pages = ${HPX_THREAD_GUARD_PAGE:1}

Property

Description

hpx.location

This is initialized to the location HPX has been installed to, taken from the environment variable HPX_LOCATION if set, and from system.prefix otherwise.

hpx.component_path

This is initialized to the list of directories where the HPX runtime library will look for installed components. Duplicates are discarded. This property can refer to a list of directories separated by ':' (Linux, Android, and MacOS) or using ';' (Windows).

hpx.master_ini_path

This is initialized to the list of default paths of the main hpx.ini configuration files. This property can refer to a list of directories separated by ':' (Linux, Android, and MacOS) or using ';' (Windows).

hpx.ini_path

This is initialized to the default path where HPX will look for more ini configuration files. This property can refer to a list of directories separated by ':' (Linux, Android, and MacOS) or using ';' (Windows).

hpx.os_threads

This setting reflects the number of OS-threads used for running HPX-threads. Defaults to 1.

hpx.localities

This setting reflects the number of localities the application is running on. Defaults to 1.

hpx.program_name

This setting reflects the program name of the application instance. Initialized from the command line (argv[0]).

hpx.cmd_line

This setting reflects the actual command line used to launch this application instance.

hpx.lock_detection

This setting verifies that no locks are being held while an HPX thread is suspended. This setting is applicable only if HPX_WITH_VERIFY_LOCKS is set during configuration in CMake.

hpx.throw_on_held_lock

This setting causes an exception to be thrown if, during lock detection, at least one lock is being held while an HPX thread is suspended. This setting is applicable only if HPX_WITH_VERIFY_LOCKS is set during configuration in CMake. This setting has no effect if hpx.lock_detection=0.

hpx.minimal_deadlock_detection

This setting enables support for minimal deadlock detection for HPX-threads. By default this is set to 1 (for Debug builds) or to 0 (for Release, RelWithDebInfo, and MinSizeRel builds). This setting is effective only if HPX_WITH_THREAD_DEADLOCK_DETECTION is set during configuration in CMake.

hpx.stacks.small_size

This is initialized to the small stack size to be used by HPX-threads. Set by default to the value of the compile time preprocessor constant HPX_SMALL_STACK_SIZE (defaults to 0x8000).

hpx.stacks.medium_size

This is initialized to the medium stack size to be used by HPX-threads. Set by default to the value of the compile time preprocessor constant HPX_MEDIUM_STACK_SIZE (defaults to 0x20000).

hpx.stacks.large_size

This is initialized to the large stack size to be used by HPX-threads. Set by default to the value of the compile time preprocessor constant HPX_LARGE_STACK_SIZE (defaults to 0x200000).

hpx.stacks.huge_size

This is initialized to the huge stack size to be used by HPX-threads. Set by default to the value of the compile time preprocessor constant HPX_HUGE_STACK_SIZE (defaults to 0x2000000).

hpx.stacks.use_guard_pages

This entry controls whether the coroutine library will generate stack guard pages or not. This entry is applicable on Linux only and only if the HPX_USE_GENERIC_COROUTINE_CONTEXT option is not enabled and the HPX_WITH_THREAD_GUARD_PAGE is set to 1 while configuring the build system. It is set by default to 1.
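For example, to override the small stack size for an application, the corresponding property can be set in an application-supplied ini file (the value shown is illustrative):

[hpx.stacks]
small_size = 0x10000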

The hpx.threadpools Configuration Section

[hpx.threadpools]
io_pool_size = ${HPX_NUM_IO_POOL_SIZE:2}
parcel_pool_size = ${HPX_NUM_PARCEL_POOL_SIZE:2}
timer_pool_size = ${HPX_NUM_TIMER_POOL_SIZE:2}

Property

Description

hpx.threadpools.io_pool_size

The value of this property defines the number of OS-threads created for the internal I/O thread pool.

hpx.threadpools.parcel_pool_size

The value of this property defines the number of OS-threads created for the internal parcel thread pool.

hpx.threadpools.timer_pool_size

The value of this property defines the number of OS-threads created for the internal timer thread pool.

The hpx.components Configuration Section

[hpx.components]
load_external = ${HPX_LOAD_EXTERNAL_COMPONENTS:1}

Property

Description

hpx.components.load_external

This entry defines whether external components will be loaded on this locality. This entry is normally set to 1, and usually there is no need to change this value directly. It is automatically set to 0 for a dedicated AGAS server locality.

Additionally, the section hpx.components will be populated with the information gathered from all found components. The information loaded for each of the components will contain at least the following properties:

[hpx.components.<component_instance_name>]
name = <component_name>
path = <full_path_of_the_component_module>
enabled = $[hpx.components.load_external]

Property

Description

hpx.components.<component_instance_name>.name

This is the name of a component, usually the same as the second argument to the macro used while registering the component with HPX_REGISTER_COMPONENT. Set by the component factory.

hpx.components.<component_instance_name>.path

This is either the full path file name of the component module or the directory the component module is located in. In this case, the component module name will be derived from the property hpx.components.<component_instance_name>.name. Set by the component factory.

hpx.components.<component_instance_name>.enabled

This setting explicitly enables or disables the component. This is an optional property; HPX assumes that the component is enabled if it is not defined.

The value for <component_instance_name> is usually the same as for the corresponding name property. However, it can generally be set to any arbitrary instance name. It is used to distinguish between different ini sections, one for each component.

The hpx.parcel Configuration Section

[hpx.parcel]
address = ${HPX_PARCEL_SERVER_ADDRESS:<hpx_initial_ip_address>}
port = ${HPX_PARCEL_SERVER_PORT:<hpx_initial_ip_port>}
bootstrap = ${HPX_PARCEL_BOOTSTRAP:<hpx_parcel_bootstrap>}
max_connections = ${HPX_PARCEL_MAX_CONNECTIONS:<hpx_parcel_max_connections>}
max_connections_per_locality = ${HPX_PARCEL_MAX_CONNECTIONS_PER_LOCALITY:<hpx_parcel_max_connections_per_locality>}
max_message_size = ${HPX_PARCEL_MAX_MESSAGE_SIZE:<hpx_parcel_max_message_size>}
max_outbound_message_size = ${HPX_PARCEL_MAX_OUTBOUND_MESSAGE_SIZE:<hpx_parcel_max_outbound_message_size>}
array_optimization = ${HPX_PARCEL_ARRAY_OPTIMIZATION:1}
zero_copy_optimization = ${HPX_PARCEL_ZERO_COPY_OPTIMIZATION:$[hpx.parcel.array_optimization]}
async_serialization = ${HPX_PARCEL_ASYNC_SERIALIZATION:1}
enable_security = ${HPX_PARCEL_ENABLE_SECURITY:0}
message_handlers = ${HPX_PARCEL_MESSAGE_HANDLERS:0}

Property

Description

hpx.parcel.address

This property defines the default IP address to be used for the parcel layer to listen to. This IP address will be used as long as no other values are specified (for instance using the --hpx:hpx command line option). The expected format is any valid IP address or domain name format which can be resolved into an IP address. The default depends on the compile time preprocessor constant HPX_INITIAL_IP_ADDRESS ("127.0.0.1").

hpx.parcel.port

This property defines the default IP port to be used for the parcel layer to listen to. This IP port will be used as long as no other values are specified (for instance using the --hpx:hpx command line option). The default depends on the compile time preprocessor constant HPX_INITIAL_IP_PORT (7010).

hpx.parcel.bootstrap

This property defines which parcelport type should be used during application bootstrap. The default depends on the compile time preprocessor constant HPX_PARCEL_BOOTSTRAP ("tcp").

hpx.parcel.max_connections

This property defines how many network connections between different localities are overall kept alive by each locality. The default depends on the compile time preprocessor constant HPX_PARCEL_MAX_CONNECTIONS (512).

hpx.parcel.max_connections_per_locality

This property defines the maximum number of network connections that one locality will open to another locality. The default depends on the compile time preprocessor constant HPX_PARCEL_MAX_CONNECTIONS_PER_LOCALITY (4).

hpx.parcel.max_message_size

This property defines the maximum allowed message size which will be transferable through the parcel layer. The default depends on the compile time preprocessor constant HPX_PARCEL_MAX_MESSAGE_SIZE (1000000000 bytes).

hpx.parcel.max_outbound_message_size

This property defines the maximum allowed outbound coalesced message size which will be transferable through the parcel layer. The default depends on the compile time preprocessor constant HPX_PARCEL_MAX_OUTBOUND_MESSAGE_SIZE (1000000 bytes).

hpx.parcel.array_optimization

This property defines whether this locality is allowed to utilize array optimizations during serialization of parcel data. The default is 1.

hpx.parcel.zero_copy_optimization

This property defines whether this locality is allowed to utilize zero copy optimizations during serialization of parcel data. The default is the same value as set for hpx.parcel.array_optimization.

hpx.parcel.async_serialization

This property defines whether this locality is allowed to spawn a new thread for serialization (this is both for encoding and decoding parcels). The default is 1.

hpx.parcel.enable_security

This property defines whether this locality is encrypting parcels. The default is 0.

hpx.parcel.message_handlers

This property defines whether message handlers are loaded. The default is 0.
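Any of these properties can also be overridden on the command line when launching an application, for example reusing the hello_world_client built earlier (the port value is illustrative):

./hello_world_client --hpx:ini=hpx.parcel.port=7910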

The following settings relate to the TCP/IP parcelport.

[hpx.parcel.tcp]
enable = ${HPX_HAVE_PARCELPORT_TCP:1}
array_optimization = ${HPX_PARCEL_TCP_ARRAY_OPTIMIZATION:$[hpx.parcel.array_optimization]}
zero_copy_optimization = ${HPX_PARCEL_TCP_ZERO_COPY_OPTIMIZATION:$[hpx.parcel.zero_copy_optimization]}
async_serialization = ${HPX_PARCEL_TCP_ASYNC_SERIALIZATION:$[hpx.parcel.async_serialization]}
enable_security = ${HPX_PARCEL_TCP_ENABLE_SECURITY:$[hpx.parcel.enable_security]}
parcel_pool_size = ${HPX_PARCEL_TCP_PARCEL_POOL_SIZE:$[hpx.threadpools.parcel_pool_size]}
max_connections =  ${HPX_PARCEL_TCP_MAX_CONNECTIONS:$[hpx.parcel.max_connections]}
max_connections_per_locality = ${HPX_PARCEL_TCP_MAX_CONNECTIONS_PER_LOCALITY:$[hpx.parcel.max_connections_per_locality]}
max_message_size =  ${HPX_PARCEL_TCP_MAX_MESSAGE_SIZE:$[hpx.parcel.max_message_size]}
max_outbound_message_size =  ${HPX_PARCEL_TCP_MAX_OUTBOUND_MESSAGE_SIZE:$[hpx.parcel.max_outbound_message_size]}

Property

Description

hpx.parcel.tcp.enable

Enable the use of the default TCP parcelport. Note that the initial bootstrap of the overall HPX application will be performed using the default TCP connections. This parcelport is enabled by default. This will be disabled only if MPI is enabled (see below).

hpx.parcel.tcp.array_optimization

This property defines whether this locality is allowed to utilize array optimizations in the TCP/IP parcelport during serialization of parcel data. The default is the same value as set for hpx.parcel.array_optimization.

hpx.parcel.tcp.zero_copy_optimization

This property defines whether this locality is allowed to utilize zero copy optimizations in the TCP/IP parcelport during serialization of parcel data. The default is the same value as set for hpx.parcel.zero_copy_optimization.

hpx.parcel.tcp.async_serialization

This property defines whether this locality is allowed to spawn a new thread for serialization in the TCP/IP parcelport (this is both for encoding and decoding parcels). The default is the same value as set for hpx.parcel.async_serialization.

hpx.parcel.tcp.enable_security

This property defines whether this locality is encrypting parcels in the TCP/IP parcelport. The default is the same value as set for hpx.parcel.enable_security.

hpx.parcel.tcp.parcel_pool_size

The value of this property defines the number of OS-threads created for the internal parcel thread pool of the TCP parcel port. The default is taken from hpx.threadpools.parcel_pool_size.

hpx.parcel.tcp.max_connections

This property defines how many network connections between different localities are overall kept alive by each locality. The default is taken from hpx.parcel.max_connections.

hpx.parcel.tcp.max_connections_per_locality

This property defines the maximum number of network connections that one locality will open to another locality. The default is taken from hpx.parcel.max_connections_per_locality.

hpx.parcel.tcp.max_message_size

This property defines the maximum allowed message size which will be transferable through the parcel layer. The default is taken from hpx.parcel.max_message_size.

hpx.parcel.tcp.max_outbound_message_size

This property defines the maximum allowed outbound coalesced message size which will be transferable through the parcel layer. The default is taken from hpx.parcel.max_outbound_message_size.

The following settings relate to the shared memory parcelport (which is usable for communication between two localities on the same node). These settings take effect only if the compile time constant HPX_HAVE_PARCELPORT_IPC is set (the equivalent cmake variable is HPX_WITH_PARCELPORT_IPC, and has to be set to ON).

[hpx.parcel.ipc]
enable = ${HPX_HAVE_PARCELPORT_IPC:0}
data_buffer_cache_size=${HPX_PARCEL_IPC_DATA_BUFFER_CACHE_SIZE:512}
array_optimization = ${HPX_PARCEL_IPC_ARRAY_OPTIMIZATION:$[hpx.parcel.array_optimization]}
async_serialization = ${HPX_PARCEL_IPC_ASYNC_SERIALIZATION:$[hpx.parcel.async_serialization]}
enable_security = ${HPX_PARCEL_IPC_ENABLE_SECURITY:$[hpx.parcel.enable_security]}
parcel_pool_size = ${HPX_PARCEL_IPC_PARCEL_POOL_SIZE:$[hpx.threadpools.parcel_pool_size]}
max_connections =  ${HPX_PARCEL_IPC_MAX_CONNECTIONS:$[hpx.parcel.max_connections]}
max_connections_per_locality = ${HPX_PARCEL_IPC_MAX_CONNECTIONS_PER_LOCALITY:$[hpx.parcel.max_connections_per_locality]}
max_message_size =  ${HPX_PARCEL_IPC_MAX_MESSAGE_SIZE:$[hpx.parcel.max_message_size]}
max_outbound_message_size =  ${HPX_PARCEL_IPC_MAX_OUTBOUND_MESSAGE_SIZE:$[hpx.parcel.max_outbound_message_size]}

Property

Description

hpx.parcel.ipc.enable

Enable the use of the shared memory parcelport for connections between localities running on the same node. Note that the initial bootstrap of the overall HPX application will still be performed using the default TCP connections. This parcelport is disabled by default.

hpx.parcel.ipc.data_buffer_cache_size

This property specifies the number of cached data buffers used for interprocess communication between localities on the same node. The default depends on the compile time preprocessor constant HPX_PARCEL_IPC_DATA_BUFFER_CACHE_SIZE (512).

hpx.parcel.ipc.array_optimization

This property defines whether this locality is allowed to utilize array optimizations in the shared memory parcelport during serialization of parcel data. The default is the same value as set for hpx.parcel.array_optimization.

hpx.parcel.ipc.async_serialization

This property defines whether this locality is allowed to spawn a new thread for serialization in the shared memory parcelport (this is both for encoding and decoding parcels). The default is the same value as set for hpx.parcel.async_serialization.

hpx.parcel.ipc.enable_security

This property defines whether this locality is encrypting parcels in the shared memory parcelport. The default is the same value as set for hpx.parcel.enable_security.

hpx.parcel.ipc.parcel_pool_size

The value of this property defines the number of OS-threads created for the internal parcel thread pool of the ipc parcel port. The default is taken from hpx.threadpools.parcel_pool_size.

hpx.parcel.ipc.max_connections

This property defines how many network connections between different localities are overall kept alive by each locality. The default is taken from hpx.parcel.max_connections.

hpx.parcel.ipc.max_connections_per_locality

This property defines the maximum number of network connections that one locality will open to another locality. The default is taken from hpx.parcel.max_connections_per_locality.

hpx.parcel.ipc.max_message_size

This property defines the maximum allowed message size which will be transferable through the parcel layer. The default is taken from hpx.parcel.max_message_size.

hpx.parcel.ipc.max_outbound_message_size

This property defines the maximum allowed outbound coalesced message size which will be transferable through the parcel layer. The default is taken from hpx.parcel.max_outbound_message_size.

The following settings relate to the Infiniband parcelport. These settings take effect only if the compile time constant HPX_HAVE_PARCELPORT_IBVERBS is set (the equivalent cmake variable is HPX_WITH_PARCELPORT_IBVERBS, and has to be set to ON).

[hpx.parcel.ibverbs]
enable = ${HPX_PARCELPORT_IBVERBS:0}
buffer_size = ${HPX_PARCEL_IBVERBS_BUFFER_SIZE:65536}
array_optimization = ${HPX_PARCEL_IBVERBS_ARRAY_OPTIMIZATION:$[hpx.parcel.array_optimization]}
async_serialization = ${HPX_PARCEL_IBVERBS_ASYNC_SERIALIZATION:$[hpx.parcel.async_serialization]}
enable_security = ${HPX_PARCEL_IBVERBS_ENABLE_SECURITY:$[hpx.parcel.enable_security]}
parcel_pool_size = ${HPX_PARCEL_IBVERBS_PARCEL_POOL_SIZE:$[hpx.threadpools.parcel_pool_size]}
max_connections =  ${HPX_PARCEL_IBVERBS_MAX_CONNECTIONS:$[hpx.parcel.max_connections]}
max_connections_per_locality = ${HPX_PARCEL_IBVERBS_MAX_CONNECTIONS_PER_LOCALITY:$[hpx.parcel.max_connections_per_locality]}
max_message_size =  ${HPX_PARCEL_IBVERBS_MAX_MESSAGE_SIZE:$[hpx.parcel.max_message_size]}
max_outbound_message_size =  ${HPX_PARCEL_IBVERBS_MAX_OUTBOUND_MESSAGE_SIZE:$[hpx.parcel.max_outbound_message_size]}

Property

Description

hpx.parcel.ibverbs.enable

Enable the use of the ibverbs parcelport for connections between localities running on a node with infiniband capable hardware. Note that the initial bootstrap of the overall HPX application will still be performed using the default TCP/IP connections. This parcelport is disabled by default.

hpx.parcel.ibverbs.buffer_size

This property specifies the size in bytes of the buffers registered to the infiniband hardware. Parcels which are smaller than this will be serialized and sent over the network in a zero-copy fashion. Parcels bigger than this will be transparently copied to a big enough temporary buffer.

hpx.parcel.ibverbs.array_optimization

This property defines whether this locality is allowed to utilize array optimizations in the ibverbs parcelport during serialization of parcel data. The default is the same value as set for hpx.parcel.array_optimization.

hpx.parcel.ibverbs.async_serialization

This property defines whether this locality is allowed to spawn a new thread for serialization in the ibverbs parcelport (this is both for encoding and decoding parcels). The default is the same value as set for hpx.parcel.async_serialization.

hpx.parcel.ibverbs.enable_security

This property defines whether this locality is encrypting parcels in the ibverbs parcelport. The default is the same value as set for hpx.parcel.enable_security.

hpx.parcel.ibverbs.parcel_pool_size

The value of this property defines the number of OS-threads created for the internal parcel thread pool of the ibverbs parcel port. The default is taken from hpx.threadpools.parcel_pool_size.

hpx.parcel.ibverbs.max_connections

This property defines how many network connections between different localities are kept alive by each locality overall. The default is taken from hpx.parcel.max_connections.

hpx.parcel.ibverbs.max_connections_per_locality

This property defines the maximum number of network connections that one locality will open to another locality. The default is taken from hpx.parcel.max_connections_per_locality.

hpx.parcel.ibverbs.max_message_size

This property defines the maximum allowed message size which will be transferable through the parcel layer. The default is taken from hpx.parcel.max_message_size.

hpx.parcel.ibverbs.max_outbound_message_size

This property defines the maximum allowed outbound coalesced message size which will be transferable through the parcel layer. The default is taken from hpx.parcel.max_outbound_message_size.

The following settings relate to the MPI parcelport. These settings take effect only if the compile time constant HPX_HAVE_PARCELPORT_MPI is set (the equivalent cmake variable is HPX_WITH_PARCELPORT_MPI, and has to be set to ON).

[hpx.parcel.mpi]
enable = ${HPX_HAVE_PARCELPORT_MPI:1}
env = ${HPX_HAVE_PARCELPORT_MPI_ENV:MV2_COMM_WORLD_RANK,PMI_RANK,OMPI_COMM_WORLD_SIZE,ALPS_APP_PE}
multithreaded = ${HPX_HAVE_PARCELPORT_MPI_MULTITHREADED:0}
rank = <MPI_rank>
processor_name = <MPI_processor_name>
array_optimization = ${HPX_HAVE_PARCEL_MPI_ARRAY_OPTIMIZATION:$[hpx.parcel.array_optimization]}
zero_copy_optimization = ${HPX_HAVE_PARCEL_MPI_ZERO_COPY_OPTIMIZATION:$[hpx.parcel.zero_copy_optimization]}
use_io_pool = ${HPX_HAVE_PARCEL_MPI_USE_IO_POOL:1}
async_serialization = ${HPX_HAVE_PARCEL_MPI_ASYNC_SERIALIZATION:$[hpx.parcel.async_serialization]}
enable_security = ${HPX_HAVE_PARCEL_MPI_ENABLE_SECURITY:$[hpx.parcel.enable_security]}
parcel_pool_size = ${HPX_HAVE_PARCEL_MPI_PARCEL_POOL_SIZE:$[hpx.threadpools.parcel_pool_size]}
max_connections =  ${HPX_HAVE_PARCEL_MPI_MAX_CONNECTIONS:$[hpx.parcel.max_connections]}
max_connections_per_locality = ${HPX_HAVE_PARCEL_MPI_MAX_CONNECTIONS_PER_LOCALITY:$[hpx.parcel.max_connections_per_locality]}
max_message_size =  ${HPX_HAVE_PARCEL_MPI_MAX_MESSAGE_SIZE:$[hpx.parcel.max_message_size]}
max_outbound_message_size =  ${HPX_HAVE_PARCEL_MPI_MAX_OUTBOUND_MESSAGE_SIZE:$[hpx.parcel.max_outbound_message_size]}

Property

Description

hpx.parcel.mpi.enable

Enable the use of the MPI parcelport. HPX tries to detect if the application was started within a parallel MPI environment. If the detection was successful, the MPI parcelport is enabled by default. To explicitly disable the MPI parcelport, set to 0. Note that the initial bootstrap of the overall HPX application will be performed using MPI as well.

hpx.parcel.mpi.env

This property influences which environment variables (comma separated) will be analyzed to find out whether the application was invoked by MPI.

hpx.parcel.mpi.multithreaded

This property is used to determine which threading mode to use when initializing MPI. If this setting is 0, HPX will initialize MPI with MPI_THREAD_SINGLE; if the value is not equal to 0, HPX will initialize MPI with MPI_THREAD_MULTIPLE.

hpx.parcel.mpi.rank

This property will be initialized to the MPI rank of the locality.

hpx.parcel.mpi.processor_name

This property will be initialized to the MPI processor name of the locality.

hpx.parcel.mpi.array_optimization

This property defines whether this locality is allowed to utilize array optimizations in the MPI parcelport during serialization of parcel data. The default is the same value as set for hpx.parcel.array_optimization.

hpx.parcel.mpi.zero_copy_optimization

This property defines whether this locality is allowed to utilize zero copy optimizations in the MPI parcelport during serialization of parcel data. The default is the same value as set for hpx.parcel.zero_copy_optimization.

hpx.parcel.mpi.use_io_pool

This property can be set to 0 to run the MPI progress handling inside of HPX threads instead of on the separate io thread pool. The default is 1 (use the io thread pool).

hpx.parcel.mpi.async_serialization

This property defines whether this locality is allowed to spawn a new thread for serialization in the MPI parcelport (this is both for encoding and decoding parcels). The default is the same value as set for hpx.parcel.async_serialization.

hpx.parcel.mpi.enable_security

This property defines whether this locality is encrypting parcels in the MPI parcelport. The default is the same value as set for hpx.parcel.enable_security.

hpx.parcel.mpi.parcel_pool_size

The value of this property defines the number of OS-threads created for the internal parcel thread pool of the MPI parcel port. The default is taken from hpx.threadpools.parcel_pool_size.

hpx.parcel.mpi.max_connections

This property defines how many network connections between different localities are kept alive by each locality overall. The default is taken from hpx.parcel.max_connections.

hpx.parcel.mpi.max_connections_per_locality

This property defines the maximum number of network connections that one locality will open to another locality. The default is taken from hpx.parcel.max_connections_per_locality.

hpx.parcel.mpi.max_message_size

This property defines the maximum allowed message size which will be transferable through the parcel layer. The default is taken from hpx.parcel.max_message_size.

hpx.parcel.mpi.max_outbound_message_size

This property defines the maximum allowed outbound coalesced message size which will be transferable through the parcel layer. The default is taken from hpx.parcel.max_outbound_message_size.
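As a concrete illustration, the following ini fragment shows how a user-level configuration file could override some of the MPI parcelport defaults described above. The values shown are purely illustrative, not recommendations:

```ini
# Illustrative user-level configuration; the values are examples only.
[hpx.parcel.mpi]
# Explicitly disable the MPI parcelport even if MPI was detected.
enable = 0
# Cap the size of messages transferred through this parcelport (in bytes).
max_message_size = 1000000
```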

The hpx.agas Configuration Section

[hpx.agas]
address = ${HPX_AGAS_SERVER_ADDRESS:<hpx_initial_ip_address>}
port = ${HPX_AGAS_SERVER_PORT:<hpx_initial_ip_port>}
service_mode = hosted
dedicated_server = 0
max_pending_refcnt_requests = ${HPX_AGAS_MAX_PENDING_REFCNT_REQUESTS:<hpx_initial_agas_max_pending_refcnt_requests>}
use_caching = ${HPX_AGAS_USE_CACHING:1}
use_range_caching = ${HPX_AGAS_USE_RANGE_CACHING:1}
local_cache_size = ${HPX_AGAS_LOCAL_CACHE_SIZE:<hpx_agas_local_cache_size>}

Property

Description

hpx.agas.address

This property defines the default IP address to be used for the AGAS root server. This IP address will be used as long as no other values are specified (for instance using the --hpx:agas command line option). The expected format is any valid IP address or domain name format which can be resolved into an IP address. The default depends on the compile time preprocessor constant HPX_INITIAL_IP_ADDRESS ("127.0.0.1").

hpx.agas.port

This property defines the default IP port to be used for the AGAS root server. This IP port will be used as long as no other values are specified (for instance using the --hpx:agas command line option). The default depends on the compile time preprocessor constant HPX_INITIAL_IP_PORT (7010).

hpx.agas.service_mode

This property specifies what type of AGAS service is running on this locality. Currently, two modes exist. The locality that acts as the AGAS server runs in bootstrap mode. All other localities are in hosted mode.

hpx.agas.dedicated_server

This property specifies whether the AGAS server is exclusively running AGAS services and not hosting any application components. It is a boolean value. Set to 1 if --hpx:run-agas-server-only is present.

hpx.agas.max_pending_refcnt_requests

This property defines the number of reference counting requests (increments or decrements) to buffer. The default depends on the compile time preprocessor constant HPX_INITIAL_AGAS_MAX_PENDING_REFCNT_REQUESTS (4096).

hpx.agas.use_caching

This property specifies whether a software address translation cache is used. It is a boolean value. Defaults to 1.

hpx.agas.use_range_caching

This property specifies whether range-based caching is used by the software address translation cache. This property is ignored if hpx.agas.use_caching is false. It is a boolean value. Defaults to 1.

hpx.agas.local_cache_size

This property defines the size of the software address translation cache for AGAS services. This property is ignored if hpx.agas.use_caching is false. Note that if hpx.agas.use_range_caching is true, this size will refer to the maximum number of ranges stored in the cache, not the number of entries spanned by the cache. The default depends on the compile time preprocessor constant HPX_AGAS_LOCAL_CACHE_SIZE (4096).
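For example, a user who wants a larger AGAS address translation cache could override the defaults shown above in an ini file; the values below are purely illustrative:

```ini
# Illustrative overrides for the AGAS settings; values are examples only.
[hpx.agas]
# Allow more entries (or ranges, if range caching is on) in the cache.
local_cache_size = 65536
# Buffer more reference counting requests before flushing them.
max_pending_refcnt_requests = 16384
```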

The hpx.commandline Configuration Section

The following table lists the definition of all pre-defined command line option shortcuts. For more information about command line options see the section HPX Command Line Options.

[hpx.commandline]
aliasing = ${HPX_COMMANDLINE_ALIASING:1}
allow_unknown = ${HPX_COMMANDLINE_ALLOW_UNKNOWN:0}

[hpx.commandline.aliases]
-a = --hpx:agas
-c = --hpx:console
-h = --hpx:help
-I = --hpx:ini
-l = --hpx:localities
-p = --hpx:app-config
-q = --hpx:queuing
-r = --hpx:run-agas-server
-t = --hpx:threads
-v = --hpx:version
-w = --hpx:worker
-x = --hpx:hpx
-0 = --hpx:node=0
-1 = --hpx:node=1
-2 = --hpx:node=2
-3 = --hpx:node=3
-4 = --hpx:node=4
-5 = --hpx:node=5
-6 = --hpx:node=6
-7 = --hpx:node=7
-8 = --hpx:node=8
-9 = --hpx:node=9

Property

Description

hpx.commandline.aliasing

Enable command line aliases as defined in the section hpx.commandline.aliases (see below). Defaults to 1.

hpx.commandline.allow_unknown

Allow for unknown command line options to be passed through to hpx_main(). Defaults to 0.

hpx.commandline.aliases.-a

On the commandline, -a expands to: --hpx:agas

hpx.commandline.aliases.-c

On the commandline, -c expands to: --hpx:console

hpx.commandline.aliases.-h

On the commandline, -h expands to: --hpx:help

hpx.commandline.aliases.--help

On the commandline, --help expands to: --hpx:help

hpx.commandline.aliases.-I

On the commandline, -I expands to: --hpx:ini

hpx.commandline.aliases.-l

On the commandline, -l expands to: --hpx:localities

hpx.commandline.aliases.-p

On the commandline, -p expands to: --hpx:app-config

hpx.commandline.aliases.-q

On the commandline, -q expands to: --hpx:queuing

hpx.commandline.aliases.-r

On the commandline, -r expands to: --hpx:run-agas-server

hpx.commandline.aliases.-t

On the commandline, -t expands to: --hpx:threads

hpx.commandline.aliases.-v

On the commandline, -v expands to: --hpx:version

hpx.commandline.aliases.--version

On the commandline, --version expands to: --hpx:version

hpx.commandline.aliases.-w

On the commandline, -w expands to: --hpx:worker

hpx.commandline.aliases.-x

On the commandline, -x expands to: --hpx:hpx

hpx.commandline.aliases.-0

On the commandline, -0 expands to: --hpx:node=0

hpx.commandline.aliases.-1

On the commandline, -1 expands to: --hpx:node=1

hpx.commandline.aliases.-2

On the commandline, -2 expands to: --hpx:node=2

hpx.commandline.aliases.-3

On the commandline, -3 expands to: --hpx:node=3

hpx.commandline.aliases.-4

On the commandline, -4 expands to: --hpx:node=4

hpx.commandline.aliases.-5

On the commandline, -5 expands to: --hpx:node=5

hpx.commandline.aliases.-6

On the commandline, -6 expands to: --hpx:node=6

hpx.commandline.aliases.-7

On the commandline, -7 expands to: --hpx:node=7

hpx.commandline.aliases.-8

On the commandline, -8 expands to: --hpx:node=8

hpx.commandline.aliases.-9

On the commandline, -9 expands to: --hpx:node=9
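As an illustration of these aliases, the two command lines below are equivalent when aliasing is enabled (my_hpx_app is a placeholder for an arbitrary HPX application):

```shell
# Both invocations expand to the same set of options.
./my_hpx_app -t4 -l2 -0
./my_hpx_app --hpx:threads=4 --hpx:localities=2 --hpx:node=0
```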

During startup and after the internal database has been initialized as described in the section Built-in Default Configuration Settings, HPX will try to locate and load additional ini files to be used as a source for configuration properties. This allows for a wide spectrum of additional customization possibilities by the user and system administrators. The sequence of locations where HPX will try loading the ini files is well defined and documented in this section. All ini files found are merged into the internal configuration database. The merge operation itself conforms to the rules as described in the section The HPX INI File Format.

  1. Load all component shared libraries found in the directories specified by the property hpx.component_path and retrieve their default configuration information (see section Loading Components for more details). This property can refer to a list of directories separated by ':' (Linux, Android, and MacOS) or using ';' (Windows).
  2. Load all files named hpx.ini in the directories referenced by the property hpx.master_ini_path. This property can refer to a list of directories separated by ':' (Linux, Android, and MacOS) or using ';' (Windows).
  3. Load a file named .hpx.ini in the current working directory, i.e. the directory the application was invoked from.
  4. Load a file referenced by the environment variable HPX_INI. This variable is expected to provide the full path name of the ini configuration file (if any).
  5. Load a file named /etc/hpx.ini. This lookup is done on non-Windows systems only.
  6. Load a file named .hpx.ini in the home directory of the current user, i.e. the directory referenced by the environment variable HOME.
  7. Load a file named .hpx.ini in the directory referenced by the environment variable PWD.
  8. Load the file specified on the command line using the option --hpx:config.
  9. Load all properties specified on the command line using the option --hpx:ini. The properties will be added to the database in the same sequence as they are specified on the command line. The format for those options is for instance --hpx:ini=hpx.default_stack_size=0x4000. In addition to the explicit command line options, this step will also set properties implied by other settings.
  10. Load files based on the pattern *.ini in all directories listed by the property hpx.ini_path. All files found during this search will be merged. The property hpx.ini_path can hold a list of directories separated by ':' (on Linux or Mac) or ';' (on Windows).
  11. Load the file specified on the command line using the option --hpx:app-config. Note that this file will be merged as the content for a top level section [application].
[Note]Note

Any changes made to the configuration database caused by one of the steps will influence the loading process for all subsequent steps. For instance, if one of the ini files loaded changes the property hpx.ini_path, this will influence the directories searched in step 10 as described above.

[Important]Important

The HPX core library verifies all configuration settings specified on the command line (using the --hpx:ini option) for validity. That means the library will accept only known configuration settings. This is to protect the user from unintentional typos while specifying those settings. This behavior can be overridden by appending a '!' to the configuration key, thus forcing the setting to be entered into the configuration database, for instance: --hpx:ini=hpx.foo! = 1.

If any of the environment variables or files listed above is not found, the corresponding loading step will be silently skipped.
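To make some of the loading steps above concrete, the following shell fragment drives steps 4, 8, and 9 explicitly (my_hpx_app and the file paths are placeholders):

```shell
# Step 4: point HPX at an explicit ini configuration file.
export HPX_INI=/opt/myapp/myapp.ini
# Step 8: load a configuration file named on the command line.
# Step 9: additionally set a single configuration property directly.
./my_hpx_app --hpx:config=local.ini --hpx:ini=hpx.logging.level=5
```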

HPX relies on loading application specific components during the runtime of an application. Moreover, HPX comes with a set of preinstalled components supporting basic functionalities useful for almost every application. Any component in HPX is loaded from a shared library, where any of the shared libraries can contain more than one component type. During startup, HPX tries to locate all available components (i.e. their corresponding shared libraries) and creates an internal component registry for later use. This section describes the algorithm used by HPX to locate all relevant shared libraries on a system. As described, this algorithm is customizable by the configuration properties loaded from the ini files (see section Loading INI Files).

Loading components is a two stage process. First, HPX tries to locate all component shared libraries, loads those, and generates a default configuration section in the internal configuration database for each component found. For each component found, the following information is generated:

[hpx.components.<component_instance_name>]
name = <name_of_shared_library>
path = $[component_path]
enabled = $[hpx.components.load_external]
default = 1

The values in this section correspond to the expected configuration information for a component as described in the section Built-in Default Configuration Settings.

In order to locate component shared libraries, HPX will try loading all shared libraries (files with the platform specific extension of a shared library, Linux: *.so, Windows: *.dll, MacOS: *.dylib) found in the directory referenced by the ini property hpx.component_path.

This first step corresponds to step 1) during the process of filling the internal configuration database with default information as described in section Loading INI Files.

After all of the configuration information has been loaded, HPX performs the second step in terms of loading components. During this step, HPX scans all existing configuration sections [hpx.components.<some_component_instance_name>] and instantiates a special factory object for each of the successfully located and loaded components. During the application's life time, these factory objects are responsible for creating new and discarding old instances of the component they are associated with. This step is performed after step 11) of the process of filling the internal configuration database with default information as described in section Loading INI Files.

In this section we assume a simple application component which exposes one member function as a component action. The header file app_server.hpp declares the C++ type to be exposed as a component. This type has a member function print_greating() which is exposed as an action (print_greating_action). We assume the source files for this example are located in a directory referenced by $APP_ROOT:

// file: $APP_ROOT/app_server.hpp
#include <hpx/hpx.hpp>
#include <hpx/include/iostreams.hpp>

namespace app
{
    // Define a simple component exposing one action 'print_greating'
    class HPX_COMPONENT_EXPORT server
      : public hpx::components::simple_component_base<server>
    {
    public:
        void print_greating()
        {
            hpx::cout << "Hey, how are you?\n" << hpx::flush;
        }

        // Component actions need to be declared, this also defines the
        // type 'print_greating_action' representing the action.
        HPX_DEFINE_COMPONENT_ACTION(server, print_greating, print_greating_action);
    };
}

// Declare boilerplate code required for each of the component actions.
HPX_REGISTER_ACTION_DECLARATION(app::server::print_greating_action);

The corresponding source file contains mainly macro invocations which define boilerplate code needed for HPX to function properly:

// file: $APP_ROOT/app_server.cpp
#include "app_server.hpp"

// Define boilerplate required once per component module.
HPX_REGISTER_COMPONENT_MODULE();

// Define factory object associated with our component of type 'app::server'.
HPX_REGISTER_COMPONENT(app::server, app_server);

// Define boilerplate code required for each of the component actions. Use the
// same argument as used for HPX_REGISTER_ACTION_DECLARATION above.
HPX_REGISTER_ACTION(app::server::print_greating_action);

The following gives an example of how the component can be used. We create one instance of the app::server component on the current locality and invoke the exposed action print_greating_action using the global id of the newly created instance. Note that no special code is required to delete the component instance once it is no longer needed. It will be deleted automatically when its last reference goes out of scope, here at the closing brace of the block surrounding the code.

// file: $APP_ROOT/use_app_server_example.cpp
#include <hpx/hpx_init.hpp>
#include "app_server.hpp"

int hpx_main()
{
    {
        // Create an instance of the app_server component on the current locality.
        hpx::naming::id_type app_server_instance =
            hpx::create_component<app::server>(hpx::find_here());

        // Create an instance of the action 'print_greating_action'.
        app::server::print_greating_action print_greating;

        // Invoke the action 'print_greating' on the newly created component.
        print_greating(app_server_instance);
    }
    return hpx::finalize();
}

int main(int argc, char* argv[])
{
    return hpx::init(argc, argv);
}

In order to make sure that the application will be able to use the component app::server, special configuration information must be passed to HPX. The simplest way to allow HPX to 'find' the component is to provide special ini configuration files, which add the necessary information to the internal configuration database. The component should have a special ini file containing the information specific to the component app_server:

# file: $APP_ROOT/app_server.ini
[hpx.components.app_server]
name = app_server
path = $APP_LOCATION/

Here $APP_LOCATION is the directory where the (binary) component shared library is located. HPX will attempt to load the shared library from there. The section name hpx.components.app_server reflects the instance name of the component (app_server is an arbitrary, but unique name). The property value for hpx.components.app_server.name should be the same as used for the second argument to the macro HPX_REGISTER_COMPONENT above.

Additionally, a file .hpx.ini located in the current working directory (see step 3 in the section Loading INI Files) can be used to add to the ini search path for components:

# file: $PWD/.hpx.ini
[hpx]
ini_path = $[hpx.ini_path]:$APP_ROOT/

This assumes that the above ini file specific to the component is located in the directory $APP_ROOT.

[Note]Note

It is possible to reference the defined property from inside its value. HPX will gracefully use the previous value of hpx.ini_path for the reference on the right hand side and assign the overall (now expanded) value to the property.

HPX uses a sophisticated logging framework which allows following in detail what operations have been performed inside the HPX library and in what sequence. This information proves to be very useful for diagnosing problems or simply for improving the understanding of what happens in HPX as a consequence of invoking HPX API functionality.

Default Logging

Enabling default logging is a simple process. The detailed description in the remainder of this section explains different ways to customize the defaults. Default logging can be enabled by using one of the following:

  • a command line switch --hpx:debug-hpx-log, which will enable logging to the console terminal
  • the command line switch --hpx:debug-hpx-log=<filename>, which enables logging to a given file <filename>, or
  • setting an environment variable HPX_LOGLEVEL=<loglevel> while running the HPX application. In this case <loglevel> should be a number between (or equal to) 1 and 5, where 1 means minimal logging and 5 logs all available messages. When setting the environment variable, the logs will be written to a file named hpx.<PID>.log in the current working directory, where <PID> is the process id of the console instance of the application.
Customizing Logging

Generally, logging can be customized either through environment variables or through an ini configuration file. Logging is generated in several categories, each of which can be customized independently. All customizable configuration parameters have reasonable defaults, allowing logging to be used without any additional configuration effort. The following table lists the available categories.

Table 9. Logging categories

Category

Category shortcut

Information to be generated

Environment variable

General

None

Logging information generated by different subsystems of HPX, such as thread-manager, parcel layer, LCOs, etc.

HPX_LOGLEVEL

AGAS

AGAS

Logging output generated by the AGAS subsystem

HPX_AGAS_LOGLEVEL

Application

APP

Logging generated by applications.

HPX_APP_LOGLEVEL


By default, all logging output is redirected to the console instance of an application, where it is collected and written to a file, one file for each logging category.

Each logging category can be customized at two levels; the parameters for each are stored in the ini configuration sections hpx.logging.CATEGORY and hpx.logging.console.CATEGORY (where 'CATEGORY' is the category shortcut as listed in the table above). The former influences logging at the source locality, while the latter modifies the logging behaviour for each of the categories at the console instance of an application.

Levels

All HPX logging output supports seven different logging levels. These levels can be set explicitly or through environment variables in the main HPX ini file as shown below. The logging levels and their associated integral values are shown in the table below, ordered from most verbose to least verbose. By default, all HPX logs are set to 0, i.e. all logging output is disabled by default.

Table 10. Logging levels

Logging level

Integral value

<debug>

5

<info>

4

<warning>

3

<error>

2

<fatal>

1

No logging

0


[Tip]Tip

The easiest way to enable logging output is to set the environment variable corresponding to the logging category to an integral value as described in the table above. For instance, setting HPX_LOGLEVEL=5 will enable full logging output for the general category. Please note, that the syntax and means of setting environment variables varies between operating systems.
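For instance, on a POSIX shell the general and AGAS categories could be fully enabled like this (my_hpx_app is a placeholder application name; use set instead of export on Windows):

```shell
# Enable the most verbose logging for the general and AGAS categories.
export HPX_LOGLEVEL=5
export HPX_AGAS_LOGLEVEL=5
./my_hpx_app
```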

Configuration

Logs will be saved to destinations as configured by the user. By default, logging output is saved on the console instance of an application to hpx.<CATEGORY>.<PID>.log (where <CATEGORY> and <PID> are placeholders for the category shortcut and the OS process id). The output for the general logging category is saved to hpx.<PID>.log. The default settings for the general logging category are shown here (the syntax is described in the section The HPX INI File Format):

[hpx.logging]
level = ${HPX_LOGLEVEL:0}
destination = ${HPX_LOGDESTINATION:console}
format = ${HPX_LOGFORMAT:(T%locality%/%hpxthread%.%hpxphase%/%hpxcomponent%) P%parentloc%/%hpxparent%.%hpxparentphase% %time%($hh:$mm.$ss.$mili) [%idx%]|\\n}

The logging level is taken from the environment variable HPX_LOGLEVEL and defaults to zero, i.e. no logging. The default logging destination is read from the environment variable HPX_LOGDESTINATION. On any of the localities it defaults to console, which redirects all generated logging output to the console instance of an application. The following table lists the possible destinations for any logging output. It is possible to specify more than one destination separated by whitespace.

Table 11. Logging destinations

Logging destination

Description

file(<filename>)

Direct all output to a file with the given <filename>.

cout

Direct all output to the local standard output of the application instance on this locality.

cerr

Direct all output to the local standard error output of the application instance on this locality.

console

Direct all output to the console instance of the application. The console instance has its logging destinations configured separately.

android_log

Direct all output to the (Android) system log (available on Android systems only).
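Since more than one destination may be given, a configuration such as the following (illustrative values) would duplicate the general logging output to the local standard error stream and to a file:

```ini
# Illustrative: log to two destinations at once, separated by whitespace.
[hpx.logging]
level = 4
destination = cerr file(myapp.log)
```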


The logging format is read from the environment variable HPX_LOGFORMAT and it defaults to a complex format description. This format consists of several placeholder fields (for instance %locality%) which will be replaced by concrete values when the logging output is generated. All other information is transferred verbatim to the output. The table below describes the available field placeholders. The separator character | separates the logging message prefix (formatted as shown) from the actual log message, which will replace the separator.

Table 12. Available field placeholders

Name

Description

locality

The id of the locality on which the logging message was generated.

hpxthread

The id of the HPX-thread generating this logging output.

hpxphase

The phase[a] of the HPX-thread generating this logging output.

hpxcomponent

The local virtual address of the component which the current HPX-thread is accessing.

parentloc

The id of the locality where the HPX thread was running which initiated the current HPX-thread. The current HPX-thread is generating this logging output.

hpxparent

The id of the HPX-thread which initiated the current HPX-thread. The current HPX-thread is generating this logging output.

hpxparentphase

The phase of the HPX-thread when it initiated the current HPX-thread. The current HPX-thread is generating this logging output.

time

The time stamp for this logging output line as generated by the source locality.

idx

The sequence number of the logging output line as generated on the source locality.

osthread

The sequence number of the OS-thread which executes the current HPX-thread.

[a] The phase of an HPX-thread counts how often this thread has been activated.


[Note]Note

Not all of the field placeholders may be expanded for all generated logging output. If no value is available for a particular field, it is replaced with a sequence of '-' characters.

Here is an example line from a logging output generated by one of the HPX examples (please note that this is generated on a single line, without line break):

(T00000000/0000000002d46f90.01/00000000009ebc10) P--------/0000000002d46f80.02 17:49.37.320 [000000000000004d]
    <info>  [RT] successfully created component {0000000100ff0001, 0000000000030002} of type: component_barrier[7(3)]

The default settings for the general logging category on the console is shown here:

[hpx.logging.console]
level = ${HPX_LOGLEVEL:$[hpx.logging.level]}
destination = ${HPX_CONSOLE_LOGDESTINATION:file(hpx.$[system.pid].log)}
format = ${HPX_CONSOLE_LOGFORMAT:|}

These settings define how the logging is customized once the logging output is received by the console instance of an application. The logging level is read from the environment variable HPX_LOGLEVEL (as set for the console instance of the application). The level defaults to the same value as the corresponding setting in the general logging configuration shown before. The destination on the console instance is set to be a file whose name is generated from the application's OS process id. Setting the environment variable HPX_CONSOLE_LOGDESTINATION allows customization of the naming scheme for the output file. The logging format is set to leave the original logging output unchanged, as received from one of the localities the application runs on.

The predefined command line options for any application using hpx::init are described in the table below:

Table 13. Default HPX Command Line Options

Option

Description

HPX options (allowed on command line only)

 

--hpx:help, --help or -h

print out program usage (default: this message), possible values: 'full' (additionally prints options from components)

--hpx:version, --version or -v

print out HPX version and copyright information

--hpx:info

print out HPX configuration information

--hpx:options-file arg

specify a file containing command line options (alternatively: @filepath)

HPX options (additionally allowed in an options file)

 

--hpx:worker

run this instance in worker mode

--hpx:console

run this instance in console mode

--hpx:connect

run this instance in worker mode, but connecting late

--hpx:run-agas-server

run AGAS server as part of this runtime instance

--hpx:run-hpx-main

run the hpx_main function, regardless of locality mode

--hpx:hpx arg

the IP address the HPX parcelport is listening on, expected format: 'address:port' (default: 127.0.0.1:7910)

--hpx:agas arg

the IP address the AGAS root server is running on, expected format: 'address:port' (default: 127.0.0.1:7910)

--hpx:run-agas-server-only

run only the AGAS server

--hpx:nodefile arg

the file name of a node file to use (list of nodes, one node name per line and core)

--hpx:nodes arg

the (space separated) list of the nodes to use (usually this is extracted from a node file)

--hpx:endnodes

this can be used to end the list of nodes specified using the option --hpx:nodes

--hpx:ifsuffix arg

suffix to append to host names in order to resolve them to the proper network interconnect

--hpx:ifprefix arg

prefix to prepend to host names in order to resolve them to the proper network interconnect

--hpx:iftransform arg

sed-style search and replace (s/search/replace/) used to transform host names to the proper network interconnect

--hpx:localities arg

the number of localities to wait for at application startup (default: 1)

--hpx:node arg

number of the node this locality is run on (must be unique)

--hpx:ignore-batch-env

ignore batch environment variables

--hpx:expect-connecting-localities

this locality expects other localities to dynamically connect (this is implied if the number of initial localities is larger than 1)

--hpx:pu-offset

the first processing unit this instance of HPX should be run on (default: 0), valid for --hpx:queuing=local, --hpx:queuing=abp-priority, --hpx:queuing=static, and --hpx:queuing=local-priority only

--hpx:pu-step

the step between used processing unit numbers for this instance of HPX (default: 1), valid for --hpx:queuing=local, --hpx:queuing=abp-priority, --hpx:queuing=static and --hpx:queuing=local-priority only

--hpx:threads arg

the number of operating system threads to spawn for this HPX locality (default: 1, using 'all' will spawn one thread for each processing unit)

--hpx:cores arg

the number of cores to utilize for this HPX locality (default: 'all', i.e. the number of cores is based on the number of threads (--hpx:threads) assuming --hpx:bind=compact)

--hpx:affinity arg

the affinity domain the OS threads will be confined to, possible values: pu, core, numa, machine (default: pu), valid for --hpx:queuing=local, --hpx:queuing=abp-priority, --hpx:queuing=static, and --hpx:queuing=local-priority only

--hpx:bind arg

the detailed affinity description for the OS threads, see the additional documentation for a detailed description of possible values. Do not use with --hpx:pu-step, --hpx:pu-offset, or --hpx:affinity options. Implies --hpx:numa-sensitive (--hpx:bind=none disables defining thread affinities).

--hpx:print-bind

print to the console the bit masks calculated from the arguments specified to all --hpx:bind options.

--hpx:queuing arg

the queue scheduling policy to use, options are 'local/l', 'local-priority/lo', 'abp/a', 'abp-priority', 'hierarchy/h', and 'periodic/pe' (default: local-priority/lo)

--hpx:hierarchy-arity

the arity of the thread queue tree, valid for --hpx:queuing=hierarchy only (default: 2)

--hpx:high-priority-threads arg

the number of operating system threads maintaining a high priority queue (default: number of OS threads), valid for --hpx:queuing=local, --hpx:queuing=abp-priority, and --hpx:queuing=local-priority only

--hpx:numa-sensitive

makes the local-priority scheduler NUMA sensitive, valid for --hpx:queuing=local, --hpx:queuing=abp-priority, --hpx:queuing=static, and --hpx:queuing=local-priority only

HPX configuration options

 

--hpx:app-config arg

load the specified application configuration (ini) file

--hpx:config arg

load the specified hpx configuration (ini) file

--hpx:ini arg

add a configuration definition to the default runtime configuration

--hpx:exit

exit after configuring the runtime

HPX debugging options

 

--hpx:list-symbolic-names

list all registered symbolic names after startup

--hpx:list-component-types

list all dynamic component types after startup

--hpx:dump-config-initial

print the initial runtime configuration

--hpx:dump-config

print the final runtime configuration

--hpx:debug-hpx-log [arg]

enable all messages on the HPX log channel and send all HPX logs to the target destination (default: cout)

--hpx:debug-agas-log [arg]

enable all messages on the AGAS log channel and send all AGAS logs to the target destination (default: cout)

--hpx:debug-parcel-log [arg]

enable all messages on the parcel transport log channel and send all parcel transport logs to the target destination (default: cout)

--hpx:debug-clp

debug command line processing

--hpx:attach-debugger arg

wait for a debugger to be attached, possible arg values: startup or exception (default: startup)

HPX options related to performance counters

 

--hpx:print-counter

print the specified performance counter repeatedly and/or at the times specified by --hpx:print-counter-at (see also option --hpx:print-counter-interval)

--hpx:print-counter-interval

print the performance counter(s) specified with --hpx:print-counter repeatedly after the time interval (specified in milliseconds), (default: 0, which means print once at shutdown)

--hpx:print-counter-destination

print the performance counter(s) specified with --hpx:print-counter to the given file (default: console)

--hpx:list-counters

list the names of all registered performance counters, possible values: minimal (prints counter name skeletons), full (prints all available counter names)

--hpx:list-counter-infos

list the description of all registered performance counters, possible values: minimal (prints info for counter name skeletons), 'full' (prints all available counter infos)

--hpx:print-counter-format

select the output format for the performance counter(s) specified with --hpx:print-counter; the counters can be printed in CSV format with or without a header (see option --hpx:no-csv-header), possible values: csv (prints counter values in CSV format with full names as header), csv-short (prints counter values in CSV format with the short names provided with --hpx:print-counter as --hpx:print-counter shortname,full-countername)

--hpx:no-csv-header

print the performance counter(s) specified with --hpx:print-counter and csv or csv-short format specified with --hpx:print-counter-format without header

--hpx:print-counter-at arg

print the performance counter(s) specified with --hpx:print-counter at the given point in time, possible argument values: startup, shutdown (default), noshutdown

--hpx:reset-counters

reset the performance counter(s) specified with --hpx:print-counter after they have been evaluated


Command Line Argument Shortcuts

Additionally, the following shortcuts are available from every HPX application.

Table 14. Predefined command line option shortcuts

Shortcut option

Equivalent long option

-a

--hpx:agas

-c

--hpx:console

-h

--hpx:help

-I

--hpx:ini

-l

--hpx:localities

-p

--hpx:app-config

-q

--hpx:queuing

-r

--hpx:run-agas-server

-t

--hpx:threads

-v

--hpx:version

-w

--hpx:worker

-x

--hpx:hpx

-0

--hpx:node=0

-1

--hpx:node=1

-2

--hpx:node=2

-3

--hpx:node=3

-4

--hpx:node=4

-5

--hpx:node=5

-6

--hpx:node=6

-7

--hpx:node=7

-8

--hpx:node=8

-9

--hpx:node=9


It is possible to define your own shortcut options. In fact, all of the shortcuts listed above are pre-defined using the technique described here. It is also possible to redefine any of the pre-defined shortcuts to expand differently.

Shortcut options are obtained from the internal configuration database. They are stored as key-value properties in a special properties section named hpx.commandline. You can define your own shortcuts by adding the corresponding definitions to one of the ini configuration files as described in the section Configure HPX Applications. For instance, in order to define a command line shortcut --pc which should expand to --hpx:print-counter, the following configuration information needs to be added to one of the ini configuration files:

[hpx.commandline]
--pc = --hpx:print-counter
[Note]Note

Any arguments for shortcut options passed on the command line are retained and passed as arguments to the corresponding expanded option. For instance, given the definition above, the command line option

--pc=/threads{locality#0/total}/count/cumulative

would be expanded to

--hpx:print-counter=/threads{locality#0/total}/count/cumulative

[Important]Important

Any shortcut option should start either with a single '-' or with '--'. Shortcuts starting with a single '-' are interpreted as short options (i.e. everything after the first character following the '-' is treated as the argument). Shortcuts starting with '--' are interpreted as long options. No other shortcut formats are supported.

Specifying Options for Single Localities Only

For runs involving more than one locality it is sometimes desirable to supply specific command line options to single localities only, for instance when the HPX application is launched using a scheduler like PBS (for more details see the section Using PBS). For this reason, all of the command line options which have the general format --hpx:<some_key> can also be used in a more general form: --hpx:<N>:<some_key>, where <N> is the number of the locality this command line option will be applied to; all other localities simply ignore the option. For instance, the following PBS script passes the option --hpx:pu-offset=4 to locality '1' only.

#!/bin/bash
#
#PBS -l nodes=2:ppn=4

APP_PATH=~/packages/hpx/bin/hello_world
APP_OPTIONS=

pbsdsh -u $APP_PATH $APP_OPTIONS --hpx:1:pu-offset=4 --hpx:nodes=`cat $PBS_NODEFILE`
[Caution]Caution

If the first application specific argument (inside $APP_OPTIONS) is a non-option (i.e. does not start with a '-' or a '--'), then it must be placed before the option --hpx:nodes, which, in this case, should be the last option on the command line.

Alternatively, use the option --hpx:endnodes to explicitly mark the end of the list of node names:

pbsdsh -u $APP_PATH --hpx:1:pu-offset=4 --hpx:nodes=`cat $PBS_NODEFILE` --hpx:endnodes $APP_OPTIONS

This section documents the following command line option in more detail:

The Command Line Option --hpx:bind

This command line option allows one to specify the required affinity of the HPX worker threads to the underlying processing units. As a result the worker threads will run only on the processing units identified by the corresponding bind specification. The affinity settings are to be specified using --hpx:bind=<BINDINGS>, where <BINDINGS> have to be formatted as described below.

In addition to the syntax described below one can use --hpx:bind=none to disable all binding of any threads to a particular core. This is mostly supported for debugging purposes.

[Note]Note

This command line option is only available if HPX was built with support for HWLOC (Portable Hardware Locality (HWLOC)) enabled. Please see CMake Variables used to configure HPX for more details on how to enable support for HWLOC in HPX.

The specified affinities refer to specific regions within the hardware topology of a machine. In order to understand the hardware topology of a particular machine it may be useful to run the lstopo tool, which is part of Portable Hardware Locality (HWLOC), to see the reported topology tree. Seeing a topology tree will help in understanding the concepts discussed below.

Affinities can be specified using HWLOC (Portable Hardware Locality (HWLOC)) tuples. Tuples of HWLOC objects and associated indexes can be specified in the form object:index, object:index-index, or object:index,...,index. HWLOC objects represent types of mapped items in a topology tree. Possible values for objects are socket, numanode, core, and pu (processing unit). Indexes are non-negative integers that specify a unique physical object in a topology tree using its logical sequence number.

Chaining multiple tuples together in the more general form object1:index1[.object2:index2[...]] is permissible. While the first tuple's object may appear anywhere in the topology, the Nth tuple's object must have a shallower topology depth than the (N+1)th tuple's object. Put simply: as you move right in a tuple chain, objects must go deeper in the topology tree. Indexes specified in chained tuples are relative to the scope of the parent object. For example, socket:0.core:1 refers to the second core in the first socket (all indices are zero based).

Multiple affinities can be specified using several --hpx:bind command line options or by appending several affinities separated by a ';'. By default, if multiple affinities are specified, they are added.

"all" is a special affinity consisting in the entire current topology.

[Note]Note

All 'names' in an affinity specification, such as thread, socket, numanode, pu, or all, can be abbreviated. Thus the affinity specification thread:0-3=socket:0.core:1.pu:1 is fully equivalent to its shortened form t:0-3=s:0.c:1.p:1.

Here is a full grammar describing the possible format of mappings:

mappings:
    distribution
    mapping(;mapping)*

distribution:
    'compact'
    'scatter'
    'balanced'

mapping:
    thread-spec=pu-specs

thread-spec:
    'thread':range-specs

pu-specs:
    pu-spec(.pu-spec)*

pu-spec:
    type:range-specs
    ~pu-spec

range-specs:
    range-spec(,range-spec)*

range-spec:
    int
    int-int
    'all'

type:
    'socket' | 'numanode'
    'core'
    'pu'

The following example assumes a system with at least 4 cores, where each core has more than 1 processing unit (hardware threads). Running hello_world with 4 OS-threads (on 4 processing units), where each of those threads is bound to the first processing unit of each of the cores, can be achieved by invoking:

hello_world -t4 --hpx:bind=thread:0-3=core:0-3.pu:0

Here thread:0-3 specifies the OS threads for which to define affinity bindings, and core:0-3.pu:0 defines that for each of the cores (core:0-3) only their first processing unit (pu:0) should be used.

[Note]Note

The command line option --hpx:print-bind can be used to print the bitmasks generated from the affinity mappings as specified with --hpx:bind. For instance, on a system with hyperthreading enabled (i.e. 2 processing units per core), the command line:

hello_world -t4 --hpx:bind=thread:0-3=core:0-3.pu:0 --hpx:print-bind

will cause this output to be printed:

0: PU L#0(P#0), Core L#0, Socket L#0, Node L#0(P#0)
1: PU L#2(P#2), Core L#1, Socket L#0, Node L#0(P#0)
2: PU L#4(P#4), Core L#2, Socket L#0, Node L#0(P#0)
3: PU L#6(P#6), Core L#3, Socket L#0, Node L#0(P#0)

where each bit in the bitmasks corresponds to a processing unit the listed worker thread will be bound to run on.

The difference between the three possible predefined distribution schemes (compact, scatter, and balanced) is best explained with an example. Imagine that we have a system with 4 cores and 4 hardware threads per core. If we place 8 threads, the assignments produced by the compact, scatter, and balanced types are shown in the figure below. Notice that compact does not fully utilize all the cores in the system. For this reason it is recommended that applications are run using the scatter or balanced options in most cases.

Figure 7. Schematic of thread affinity type distributions



The HPX I/O-streams subsystem extends the standard C++ output streams std::cout and std::cerr to work in the distributed setting of an HPX application. All of the output streamed to hpx::cout will be dispatched to std::cout on the console locality. Likewise, all output generated from hpx::cerr will be dispatched to std::cerr on the console locality.

[Note]Note

All existing standard manipulators can be used in conjunction with hpx::cout and hpx::cerr. Historically, HPX also defines hpx::endl and hpx::flush, but those are just aliases for the corresponding standard manipulators.

In order to use either hpx::cout or hpx::cerr, application codes need to #include <hpx/include/iostreams.hpp>. For an example, please see the simplest possible 'Hello world' program, which is included as an example with HPX:

// Including 'hpx/hpx_main.hpp' instead of the usual 'hpx/hpx_init.hpp' enables
// to use the plain C-main below as the direct main HPX entry point.
#include <hpx/hpx_main.hpp>
#include <hpx/include/iostreams.hpp>

int main()
{
    // Say hello to the world!
    hpx::cout << "Hello World!\n" << hpx::flush;
    return 0;
}

Additionally, those applications need to link with the iostreams component. When using CMake this can be achieved by using the COMPONENT_DEPENDENCIES parameter, for instance:

include(HPX_AddExecutable)

add_hpx_executable(
    simplest_hello_world
    SOURCES simplest_hello_world.cpp
    COMPONENT_DEPENDENCIES iostreams
)
[Note]Note

The hpx::cout and hpx::cerr streams buffer all output locally until a std::endl or std::flush is encountered. That means that no output will appear on the console as long as neither of those is explicitly used.

In order to write an application which uses services from the HPX runtime system you need to initialize the HPX library by inserting certain calls into the code of your application. Depending on your use case, this can be done in 3 different ways:

  • Minimally invasive: Re-use the main() function as the main HPX entry point.
  • Balanced use case: Supply your own main HPX entry point while blocking the main thread.
  • Most flexibility: Supply your own main HPX entry point while avoiding to block the main thread.
Re-use the main() function as the main HPX entry point

This method is the least intrusive to your code. However, it provides the least flexibility in terms of initializing the HPX runtime system. The following code snippet shows what a minimal HPX application using this technique looks like:

#include <hpx/hpx_main.hpp>

int main(int argc, char* argv[])
{
    return 0;
}

The only change you have to make to your code is to include the file hpx/hpx_main.hpp. In this case the function main() will be invoked as the first HPX thread of the application. The runtime system will be initialized behind the scenes before the function main() is executed and will automatically be stopped after main() has returned. All HPX API functions can now be used from within this function.

[Note]Note

The function main() does not need to accept argc/argv as shown above, but could instead expose the signature int main(). This is consistent with the usually allowed prototypes for the function main() in C++ applications.

All command line arguments specific to HPX will still be processed by the HPX runtime system as usual. However, those command line options will be removed from the list of values passed to argc/argv of the function main(). The list of values passed to main() will hold only the command line options which are not recognized by the HPX runtime system (see the section HPX Command Line Options for more details on what options are recognized by HPX).

The value returned from the function main() as shown above will be returned to the operating system as usual.

[Important]Important

To achieve this seamless integration, the header file hpx/hpx_main.hpp defines a macro

#define main hpx_startup::user_main

which could result in unexpected behavior.

Supply your own main HPX entry point while blocking the main thread

With this method you need to provide an explicit main thread function named hpx_main at global scope. This function will be invoked as the main entry point of your HPX application on the console locality only (this function will be invoked as the first HPX thread of your application). All HPX API functions can be used from within this function.

The thread executing the function hpx::init will block waiting for the runtime system to exit. The value returned from hpx_main will be returned from hpx::init after the runtime system has stopped.

The function hpx::finalize has to be called on one of the HPX localities in order to signal that all work has been scheduled and the runtime system should be stopped after the scheduled work has been executed.

This method of invoking HPX has the advantage that you can decide which version of hpx::init to call. This allows you to pass additional configuration parameters while initializing the HPX runtime system.

#include <hpx/hpx_init.hpp>

int hpx_main(int argc, char* argv[])
{
    // Any HPX application logic goes here...
    return hpx::finalize();
}

int main(int argc, char* argv[])
{
    // Initialize HPX, run hpx_main as the first HPX thread, and
    // wait for hpx::finalize being called.
    return hpx::init(argc, argv);
}
[Note]Note

The function hpx_main does not need to accept argc/argv as shown above, but could expose one of the following signatures:

int hpx_main();
int hpx_main(int argc, char* argv[]);
int hpx_main(boost::program_options::variables_map& vm);

This is consistent with (and extends) the usually allowed prototypes for the function main() in C++ applications.

The header file to include for this method of using HPX is hpx/hpx_init.hpp.

Supply your own main HPX entry point while avoiding to block the main thread

With this method you need to provide an explicit main thread function named hpx_main at global scope. This function will be invoked as the main entry point of your HPX application on the console locality only (this function will be invoked as the first HPX thread of your application). All HPX API functions can be used from within this function.

The thread executing the function hpx::start will not block waiting for the runtime system to exit, but will return immediately.

[Important]Important

You cannot use any of the HPX API functions other than hpx::stop from inside your main() function.

The function hpx::finalize has to be called on one of the HPX localities in order to signal that all work has been scheduled and the runtime system should be stopped after the scheduled work has been executed.

This method of invoking HPX is useful for applications where the main thread is used for special operations, such as GUIs. The function hpx::stop can be used to wait for the HPX runtime system to exit and should at least be used as the last function called in main(). The value returned from hpx_main will be returned from hpx::stop after the runtime system has stopped.

#include <hpx/hpx_start.hpp>

int hpx_main(int argc, char* argv[])
{
    // Any HPX application logic goes here...
    return hpx::finalize();
}

int main(int argc, char* argv[])
{
    // Initialize HPX, run hpx_main.
    hpx::start(argc, argv);

    // ...Execute other code here...

    // Wait for hpx::finalize being called.
    return hpx::stop();
}
[Note]Note

The function hpx_main does not need to accept argc/argv as shown above, but could expose one of the following signatures:

int hpx_main();
int hpx_main(int argc, char* argv[]);
int hpx_main(boost::program_options::variables_map& vm);

This is consistent with (and extends) the usually allowed prototypes for the function main() in C++ applications.

The header file to include for this method of using HPX is hpx/hpx_start.hpp.

HPX implements an Active Global Address Space (AGAS) which exposes a single uniform address space spanning all localities an application runs on. AGAS is a fundamental component of the ParalleX execution model. Conceptually, there is no rigid demarcation of local or global memory in AGAS; all available memory is part of the same address space. AGAS enables named objects to be moved (migrated) across localities without having to change the object's name, i.e., no references to migrated objects ever have to be updated. This feature is significant for dynamic load balancing and for applications with highly dynamic workflows, allowing work to be migrated from heavily loaded nodes to less loaded nodes. In addition, the immutability of names ensures that AGAS does not have to keep extra indirections ("bread crumbs") when objects move, which minimizes the complexity of code management for system developers as well as the overheads of maintaining and managing aliases.

The AGAS implementation in HPX does not automatically expose every local address to the global address space. It is the responsibility of the programmer to explicitly define which of the objects have to be globally visible and which of the objects are purely local.

In HPX global addresses (global names) are represented using the hpx::id_type data type. This data type is conceptually very similar to void* pointers as it does not expose any type information of the object it is referring to.

The only predefined global addresses are those assigned to the localities themselves. The following HPX API functions allow one to retrieve the global addresses of localities:

  • hpx::find_here(): retrieve the global address of the locality this function is called on.
  • hpx::find_all_localities(): retrieve the global addresses of all localities available to this application (including the locality the function is being called on).
  • hpx::find_remote_localities(): retrieve the global addresses of all remote localities available to this application (not including the locality the function is being called on)
  • hpx::get_num_localities(): retrieve the number of localities available to this application.
  • hpx::find_locality(): retrieve the global address of any locality supporting the given component type.
  • hpx::get_colocation_id(): retrieve the global address of the locality currently hosting the object with the given global address.
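
The functions above can be combined into a minimal sketch, modeled on the hello_world example included with HPX (the reported counts depend on how many localities the application was started with):

```cpp
#include <hpx/hpx_init.hpp>
#include <hpx/include/iostreams.hpp>

#include <vector>

int hpx_main(int argc, char* argv[])
{
    // The global address of the locality this function runs on.
    hpx::id_type here = hpx::find_here();

    // The global addresses of all localities (including this one) and
    // of all remote localities (excluding this one).
    std::vector<hpx::id_type> all = hpx::find_all_localities();
    std::vector<hpx::id_type> remote = hpx::find_remote_localities();

    hpx::cout << "running on locality " << here << ", one of "
              << all.size() << " localities ("
              << remote.size() << " remote)\n" << hpx::flush;

    return hpx::finalize();
}

int main(int argc, char* argv[])
{
    // Initialize HPX, run hpx_main as the first HPX thread.
    return hpx::init(argc, argv);
}
```

Note that this example requires linking with the iostreams component as described in the section The HPX I/O-streams Component.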

Additionally, the global addresses of localities can be used to create new instances of components using the following HPX API function:

  • hpx::new_<Component>(): Create a new instance of the given Component type on the specified locality.
[Note]Note

HPX does not expose any functionality to delete component instances. All global addresses (as represented using hpx::id_type) are automatically garbage collected. When the last (global) reference to a particular component instance goes out of scope the corresponding component instance is automatically deleted.
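
As a sketch, given the global address of a locality, a component instance (such as the some_component type defined later in this section) could be created there as follows; hpx::new_ returns a future which resolves to the global address of the new instance:

```cpp
#include <hpx/include/components.hpp>

// Assumes a component type 'app::some_component' has been defined and
// registered as described in the section Writing Components.
hpx::future<hpx::id_type> create_component_on(hpx::id_type const& locality)
{
    // Asynchronously create the instance on the given locality; the
    // returned future resolves to the new instance's global address.
    return hpx::new_<app::some_component>(locality);
}
```

When the returned id goes out of scope (and no other global references exist), the instance is garbage collected as described in the note above.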

Actions are special types we use to describe possibly remote operations. For every global function and every member function which has to be invoked remotely, a special type must be defined. For any global function the special macro HPX_PLAIN_ACTION can be used to define the action type. Here is an example demonstrating this:

namespace app
{
    void some_global_function(double d)
    {
        cout << d;
    }
}

// This will define the action type 'some_global_action' which represents
// the function 'app::some_global_function'.
HPX_PLAIN_ACTION(app::some_global_function, some_global_action);
[Important]Important

The macro HPX_PLAIN_ACTION has to be placed in the global namespace, even if the wrapped function is located in some other namespace. The newly defined action type is placed in the global namespace as well.

If the action type should be defined outside of the global namespace, the action type definition has to be split into two macro invocations (HPX_DEFINE_PLAIN_ACTION and HPX_REGISTER_ACTION) as shown in the next example:

namespace app
{
    void some_global_function(double d)
    {
        cout << d;
    }

    // On conforming compilers the following macro expands to:
    //
    //    typedef hpx::actions::make_action<
    //        decltype(&some_global_function), &some_global_function
    //    >::type some_global_action;
    //
    // This will define the action type 'some_global_action' which represents
    // the function 'some_global_function'.
    HPX_DEFINE_PLAIN_ACTION(some_global_function, some_global_action);
}

// The following macro expands to a series of definitions of global objects
// which are needed for proper serialization and initialization support
// enabling the remote invocation of the function `some_global_function`.
HPX_REGISTER_ACTION(app::some_global_action, app_some_global_action);

The shown code defines an action type some_global_action inside the namespace app.

[Important]Important

If the action type definition is split between two macros as shown above, the name of the action type to create has to be the same for both macro invocations (here some_global_action).

[Important]Important

The second argument passed to HPX_REGISTER_ACTION (app_some_global_action) has to be a globally unique C++ identifier representing the action. This is used for serialization purposes.

For member functions of objects which have been registered with AGAS (e.g. 'components') a different registration macro HPX_DEFINE_COMPONENT_ACTION has to be utilized. Any component needs to be declared in a header file and have some special support macros defined in a source file. Here is an example demonstrating this. The first snippet has to go into the header file:

namespace app
{
    struct some_component
      : hpx::components::simple_component_base<some_component>
    {
        int some_member_function(std::string s)
        {
            return boost::lexical_cast<int>(s);
        }

        // This will define the action type 'some_member_action' which
        // represents the member function 'some_member_function' of the
        // object type 'some_component'.
        HPX_DEFINE_COMPONENT_ACTION(some_component, some_member_function,
            some_member_action);
    };
}

// Note: The second argument to the macro below has to be a systemwide-unique
//       C++ identifier
HPX_REGISTER_ACTION_DECLARATION(app::some_component::some_member_action, some_component_some_action);

In the simplest case, the next snippet belongs in a source file (e.g. the main application source file):

typedef hpx::components::simple_component<app::some_component> component_type;
typedef app::some_component some_component;

HPX_REGISTER_COMPONENT(component_type, some_component);

// The parameters for this macro have to be the same as used in the corresponding
// HPX_REGISTER_ACTION_DECLARATION() macro invocation above
typedef some_component::some_member_action some_component_some_action;
HPX_REGISTER_ACTION(some_component_some_action);

Granted, these macro invocations are a bit more complex than for simple global functions; however, we believe they are still manageable.

The most important macro invocation is HPX_DEFINE_COMPONENT_ACTION in the header file, as this defines the action type we need to invoke the member function. For a complete example of a simple component action see component_in_executable.cpp.

The process of invoking a global function (or a member function of an object) with the help of the associated action is called 'applying the action'. Actions can have arguments, which will be supplied while the action is applied. At the minimum, one parameter is required to apply any action - the id of the locality the associated function should be invoked on (for global functions), or the id of the component instance (for member functions). Generally, HPX provides several ways to apply an action, all of which are described in the following sections.

Generally, HPX actions are very similar to 'normal' C++ functions except that actions can be invoked remotely. The figure below shows an overview of the main API exposed by HPX. This shows the function invocation syntax as defined by the C++ language (dark gray), the additional invocation syntax as provided through C++ Standard Library features (medium gray), and the extensions added by HPX (light gray). Where: f: function to invoke; p...: (optional) arguments; R: return type of f; action: action type defined by HPX_DEFINE_PLAIN_ACTION or HPX_DEFINE_COMPONENT_ACTION encapsulating f; a: an instance of the type action; id: the global address the action is applied to.

Figure 8. Overview of the main API exposed by HPX

Overview of the main API exposed by HPX


This figure shows that HPX allows the user to apply actions with a syntax similar to the C++ standard. In fact, all action types have an overloaded function call operator, allowing the action to be applied synchronously. Further, HPX implements hpx::async, which semantically works similarly to the way std::async works for plain C++ functions.

[Note]Note

The similarity of applying an action to conventional function invocations extends even further. HPX implements hpx::bind and hpx::function: two facilities which are semantically equivalent to the std::bind and std::function types as defined by the C++11 Standard. While hpx::async extends beyond the conventional semantics by supporting actions and conventional C++ functions, the HPX facilities hpx::bind and hpx::function extend beyond the conventional standard facilities too. The HPX facilities not only support conventional functions, but can be used for actions as well.

Additionally, HPX exposes hpx::apply and hpx::async_continue, both of which refine and extend the standard C++ facilities.

The different ways to invoke a function in HPX are explained in more detail in the following sections.

This method ('fire and forget') will make sure the function associated with the action is scheduled to run on the target locality. Applying the action does not wait for the function to start running, instead it is a fully asynchronous operation. The following example shows how to apply the action as defined in the previous section on the local locality (the locality this code runs on):

some_global_action act;     // define an instance of some_global_action
hpx::apply(act, hpx::find_here(), 2.0);

(the function hpx::find_here() returns the id of the local locality, i.e. the locality this code executes on).

Any component member function can be invoked using the same syntactic construct. Given that id is the global address for a component instance created earlier, this invocation looks like:

some_component_action act;     // define an instance of some_component_action
hpx::apply(act, id, "42");

In this case any value returned from this action (here, the integer 42) is ignored. Please look at Action Type Definition for the code defining the component action (some_component_action) used.

This method will make sure the action is scheduled to run on the target locality. Applying the action itself does not wait for the function to start running or to complete, instead this is a fully asynchronous operation similar to using hpx::apply as described above. The difference is that this method will return an instance of a hpx::future<> encapsulating the result of the (possibly remote) execution. The future can be used to synchronize with the asynchronous operation. The following example shows how to apply the action from above on the local locality:

some_global_action act;     // define an instance of some_global_action
hpx::future<void> f = hpx::async(act, hpx::find_here(), 2.0);
//
// ... other code can be executed here
//
f.get();    // this will possibly wait for the asynchronous operation to 'return'

(As before, the function hpx::find_here() returns the id of the local locality, i.e. the locality this code is executed on.)

[Note]Note

The use of a hpx::future<void> allows the current thread to synchronize with any remote operation not returning any value.

[Note]Note

Any std::future<> returned from std::async() is required to block in its destructor if the value has not been set for this future yet. This is not true for hpx::future<>, which will never block in its destructor, even if the value has not been returned to the future yet. We believe that consistency in the behavior of futures is more important than standards conformance in this case.

Any component member function can be invoked using the same syntactic construct. Given that id is the global address for a component instance created earlier, this invocation looks like:

some_component_action act;     // define an instance of some_component_action
hpx::future<int> f = hpx::async(act, id, "42");
//
// ... other code can be executed here
//
cout << f.get();    // this will possibly wait for the asynchronous operation to 'return' 42
[Note]Note

The invocation of f.get() will return the result immediately (without suspending the calling thread) if the result from the asynchronous operation has already been returned. Otherwise, the invocation of f.get() will suspend the execution of the calling thread until the asynchronous operation returns its result.

This method will schedule the function wrapped in the specified action on the target locality. While the invocation appears to be synchronous (as we will see), the calling thread will be suspended while waiting for the function to return. Invoking a plain action (e.g. a global function) synchronously is straightforward:

some_global_action act;     // define an instance of some_global_action
act(hpx::find_here(), 2.0);

While this call looks just like a normal synchronous function invocation, the function wrapped by the action will be scheduled to run on a new thread and the calling thread will be suspended. After the new thread has executed the wrapped global function, the waiting thread will resume and return from the synchronous call.

Equivalently, any action wrapping a component member function can be invoked synchronously as follows:

some_component_action act;     // define an instance of some_component_action
int result = act(id, "42");

The action invocation will either schedule a new thread locally to execute the wrapped member function (as before, id is the global address of the component instance the member function should be invoked on), or it will send a parcel to the remote locality of the component causing a new thread to be scheduled there. The calling thread will be suspended until the function returns its result. This result will be returned from the synchronous action invocation.

It is very important to understand that this 'synchronous' invocation syntax in fact conceals an asynchronous function call. This is beneficial as the calling thread is suspended while waiting for the outcome of a potentially remote operation. The HPX thread scheduler will schedule other work in the meantime, allowing the application to make further progress while the remote result is computed. This helps overlap computation with communication and hide communication latencies.

[Note]Note

The syntax of applying an action is always the same, regardless of whether the target locality is remote to the invocation locality or not. This is a very important feature of HPX as it frees the user from the task of keeping track of which actions have to be applied locally and which remotely. If the target for applying an action is local, a new thread is automatically created and scheduled. Once this thread is scheduled and run, it will execute the function encapsulated by that action. If the target is remote, HPX will send a parcel to the remote locality which encapsulates the action and its parameters. Once the parcel is received on the remote locality HPX will create and schedule a new thread there. Once this thread runs on the remote locality, it will execute the function encapsulated by the action.

This method is very similar to the method described in section Applying an Action Asynchronously without any Synchronization. The difference is that it allows the user to chain a sequence of asynchronous operations, while handing the (intermediate) results from one step to the next step in the chain. Where hpx::apply invokes a single function using 'fire and forget' semantics, hpx::apply_continue asynchronously triggers a chain of functions without the need for the execution flow 'to come back' to the invocation site. Each of the asynchronous functions can be executed on a different locality.

This method is very similar to the method described in section Applying an Action Asynchronously with Synchronization. In addition to what hpx::async can do, the function hpx::async_continue takes an additional function argument. This function will be called as the continuation of the executed action. It is expected to perform additional operations and to make sure that a result is returned to the original invocation site. This method chains operations asynchronously by providing a continuation operation which is automatically executed once the first action has finished executing.

As an example we chain two actions, where the result of the first action is forwarded to the second action and the result of the second action is sent back to the original invocation site:

// first action
boost::int32_t action1(boost::int32_t i)
{
    return i+1;
}
HPX_PLAIN_ACTION(action1);    // defines action1_type

// second action
boost::int32_t action2(boost::int32_t i)
{
    return i*2;
}
HPX_PLAIN_ACTION(action2);    // defines action2_type

// this code invokes 'action1' above and passes along a continuation
// function which will forward the result returned from 'action1' to
// 'action2'.
action1_type act1;     // define an instance of 'action1_type'
action2_type act2;     // define an instance of 'action2_type'
hpx::future<int> f =
    hpx::async_continue(act1, hpx::find_here(), 42,
        hpx::make_continuation(act2));
hpx::cout << f.get() << "\n";   // will print: 86 ((42 + 1) * 2)

By default, the continuation is executed on the same locality as hpx::async_continue is invoked from. If you want to specify the locality where the continuation should be executed, the code above has to be written as:

// this code invokes 'action1' above and passes along a continuation
// function which will forward the result returned from 'action1' to
// 'action2'.
action1_type act1;     // define an instance of 'action1_type'
action2_type act2;     // define an instance of 'action2_type'
hpx::future<int> f =
    hpx::async_continue(act1, hpx::find_here(), 42,
        hpx::make_continuation(act2, hpx::find_here()));
hpx::cout << f.get() << "\n";   // will print: 86 ((42 + 1) * 2)

Similarly, it is possible to chain more than two operations:

action1_type act1;     // define an instance of 'action1_type'
action2_type act2;     // define an instance of 'action2_type'
hpx::future<int> f =
    hpx::async_continue(act1, hpx::find_here(), 42,
        hpx::make_continuation(act2,
            hpx::make_continuation(act1)));
hpx::cout << f.get() << "\n";   // will print: 87 ((42 + 1) * 2 + 1)

The function hpx::make_continuation creates a special function object which exposes the following prototype:

struct continuation
{
    template <typename Result>
    void operator()(hpx::id_type id, Result&& result) const
    {
        ...
    }
};

where the parameters passed to the overloaded function operator (operator()()) are:

  • the id is the global id where the final result of the asynchronous chain of operations should be sent to (in most cases this is the id of the hpx::future returned from the initial call to hpx::async_continue). Any custom continuation function should make sure this id is forwarded to the last operation in the chain.
  • the result is the result value of the current operation in the asynchronous execution chain. This value needs to be forwarded to the next operation.
[Note]Note

All of those operations are implemented by the predefined continuation function object which is returned from hpx::make_continuation. Any (custom) function object used as a continuation should conform to the same interface.
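
As an illustration, a custom continuation conforming to this interface might look like the following sketch. The name logging_continuation is hypothetical, and we assume hpx::set_lco_value is the facility used to deliver the final value to the target id (as the predefined continuation does); this is a sketch, not the library's implementation.

```cpp
#include <hpx/include/lcos.hpp>
#include <hpx/include/iostreams.hpp>
#include <utility>

// Hypothetical custom continuation: prints the intermediate result and
// then forwards it to the final destination. The call to
// hpx::set_lco_value (assumed API) sends the value to the future
// identified by 'id'.
struct logging_continuation
{
    template <typename Result>
    void operator()(hpx::id_type id, Result&& result) const
    {
        hpx::cout << "intermediate result: " << result << "\n";
        hpx::set_lco_value(id, std::forward<Result>(result));
    }
};
```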

As in any other asynchronous invocation scheme, it is important to be able to handle error conditions occurring while the asynchronous (and possibly remote) operation is executed. In HPX all error handling is based on standard C++ exception handling. Any exception thrown during the execution of an asynchronous operation will be transferred back to the original invocation locality, where it is rethrown during synchronization with the calling thread.

[Important]Important

Exceptions thrown during asynchronous execution can be transferred back to the invoking thread only for the synchronous and the asynchronous case with synchronization. Like with any other unhandled exception, any exception thrown during the execution of an asynchronous action without synchronization will result in calling hpx::terminate, causing the running application to exit immediately.

[Note]Note

Even if error handling internally relies on exceptions, most of the API functions exposed by HPX can be used without throwing an exception. Please see Error Handling for more information.

As an example, we will assume that the following remote function will be executed:

namespace app
{
    void some_function_with_error(int arg)
    {
        if (arg < 0) {
            HPX_THROW_EXCEPTION(hpx::bad_parameter, "some_function_with_error",
                "some really bad error happened");
        }
        // do something else...
    }
}

// This will define the action type 'some_error_action' which represents
// the function 'app::some_function_with_error'.
HPX_PLAIN_ACTION(app::some_function_with_error, some_error_action);

The use of HPX_THROW_EXCEPTION to report the error encapsulates the creation of a hpx::exception which is initialized with the error code hpx::bad_parameter. Additionally, it carries the passed strings as well as information about the file name, line number, and call stack of the point from which the exception was thrown.

We invoke this action using the synchronous syntax as described before:

// note: wrapped function will throw hpx::exception
some_error_action act;            // define an instance of some_error_action
try {
    act(hpx::find_here(), -3);    // exception will be rethrown from here
}
catch (hpx::exception const& e) {
    // prints: 'some really bad error happened: HPX(bad parameter)'
    cout << e.what();
}

If this action is invoked asynchronously with synchronization, the exception is propagated to the waiting thread as well and is re-thrown from the future's function get():

// note: wrapped function will throw hpx::exception
some_error_action act;            // define an instance of some_error_action
hpx::future<void> f = hpx::async(act, hpx::find_here(), -3);
try {
    f.get();                      // exception will be rethrown from here
}
catch (hpx::exception const& e) {
    // prints: 'some really bad error happened: HPX(bad parameter)'
    cout << e.what();
}

For more information about error handling please refer to the section Error Handling. There we also explain how to handle error conditions without having to rely on exceptions.

A component in HPX is a C++ class which can be created remotely and for which its member functions can be invoked remotely as well. The following sections highlight how components can be defined, created, and used.

In order for a C++ class type to be managed remotely in HPX, the type must be derived from the hpx::components::simple_component_base template type. We call such C++ class types 'components'.

Note that the component type itself is passed as a template argument to the base class.

// header file some_component.hpp

#include <hpx/include/components.hpp>

namespace app
{
    // Define a new component type 'some_component'
    struct some_component
      : hpx::components::simple_component_base<some_component>
    {
        // This member function can be invoked remotely
        int some_member_function(std::string const& s)
        {
            return boost::lexical_cast<int>(s);
        }

        // This will define the action type 'some_member_action' which
        // represents the member function 'some_member_function' of the
        // object type 'some_component'.
        HPX_DEFINE_COMPONENT_ACTION(some_component, some_member_function, some_member_action);
    };
}

// This will generate the necessary boiler-plate code for the action allowing
// it to be invoked remotely. This declaration macro has to be placed in the
// header file defining the component itself.
//
// Note: The second arguments to the macro below have to be systemwide-unique
//       C++ identifiers
//
HPX_REGISTER_ACTION_DECLARATION(app::some_component::some_member_action, some_component_some_action);

There is more boilerplate code which has to be placed into a source file in order for the component to be usable. Every component type requires certain macros in its source file: one for the component type itself and one for each of the actions it defines.

For instance:

// source file some_component.cpp

#include "some_component.hpp"

// The following code generates all necessary boiler plate to enable the
// remote creation of 'app::some_component' instances with 'hpx::new_<>()'
//
using some_component = app::some_component;
using component_type = hpx::components::simple_component<some_component>;

// Please note that the second argument to this macro must be a
// (system-wide) unique C++-style identifier (without any namespaces)
//
HPX_REGISTER_COMPONENT(component_type, some_component);

// The parameters for this macro have to be the same as used in the corresponding
// HPX_REGISTER_ACTION_DECLARATION() macro invocation in the corresponding
// header file.
//
// Please note that the second argument to this macro must be a
// (system-wide) unique C++-style identifier (without any namespaces)
//
HPX_REGISTER_ACTION(app::some_component::some_member_action, some_component_some_action);

Often it is very convenient to define a separate type for a component which can be used on the client side (from where the component is instantiated and used). This step might seem like unnecessary code duplication; however, it significantly increases the type safety of the code.

A possible implementation of such a client side representation for the component described in the previous section could look like:

#include <hpx/include/components.hpp>

namespace app
{
    // Define a client side representation type for the component type
    // 'some_component' defined in the previous section.
    //
    struct some_component_client
      : hpx::components::client_base<some_component_client, some_component>
    {
        using base_type = hpx::components::client_base<
                some_component_client, some_component>;

        some_component_client(hpx::future<hpx::id_type> && id)
          : base_type(std::move(id))
        {}

        hpx::future<int> some_member_function(std::string const& s)
        {
            some_component::some_member_action act;
            return hpx::async(act, get_id(), s);
        }
    };
}

A client side object stores the global id of the component instance it represents. This global id is accessible by calling the function client_base<>::get_id(). The special constructor provided in the example allows this client side object to be created directly using the API function hpx::new_<>().
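
Under the assumptions of the previous sections (the component and its client type have been defined and registered as shown), a minimal usage sketch could look like this; component creation via hpx::new_<> is described in the next section:

```cpp
// Create a component instance on the local locality via hpx::new_<>
// and invoke the member function through the client side object.
app::some_component_client c =
    hpx::new_<app::some_component_client>(hpx::find_here());

hpx::future<int> f = c.some_member_function("42");
std::cout << f.get() << std::endl;    // prints: 42
```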

Instances of defined component types can be created in two different ways. If the component to create has a defined client side representation type, then this can be used, otherwise use the server type.

The following examples assume that component_type is the type of the server side implementation of the component to create. All additional arguments (see the ', ...' notation below) are passed through to the corresponding constructor calls of those objects.

// create one instance on the given locality
hpx::id_type here = hpx::find_here();
hpx::future<hpx::id_type> f =
    hpx::new_<component_type>(here, ...);

// create one instance using the given distribution
// policy (here: hpx::colocating_distribution_policy)
hpx::id_type here = hpx::find_here();
hpx::future<hpx::id_type> f =
    hpx::new_<component_type>(hpx::colocated(here), ...);


// create multiple instances on the given locality
hpx::id_type here = find_here();
hpx::future<std::vector<hpx::id_type>> f =
    hpx::new_<component_type[]>(here, num, ...);

// create multiple instances using the given distribution
// policy (here: hpx::binpacking_distribution_policy)
hpx::future<std::vector<hpx::id_type>> f = hpx::new_<component_type[]>(
    hpx::binpacking(hpx::find_all_localities()), num, ...);

The examples below demonstrate the use of the same API functions for creating client side representation objects (instead of just plain ids). These examples assume that client_type is the type of the client side representation of the component type to create. As above, all additional arguments (see the ', ...' notation below) are passed through to the corresponding constructor calls of the server side implementation objects corresponding to the client_type.

// create one instance on the given locality
hpx::id_type here = hpx::find_here();
client_type c = hpx::new_<client_type>(here, ...);

// create one instance using the given distribution
// policy (here: hpx::colocating_distribution_policy)
hpx::id_type here = hpx::find_here();
client_type c = hpx::new_<client_type>(hpx::colocated(here), ...);


// create multiple instances on the given locality
hpx::id_type here = hpx::find_here();
hpx::future<std::vector<client_type>> f =
    hpx::new_<client_type[]>(here, num, ...);

// create multiple instances using the given distribution
// policy (here: hpx::binpacking_distribution_policy)
hpx::future<std::vector<client_type>> f = hpx::new_<client_type[]>(
    hpx::binpacking(hpx::find_all_localities()), num, ...);

Lightweight Control Objects provide synchronization for HPX applications. Most of them are familiar from other frameworks, but a few of them work in slightly different ways adapted to HPX.

  1. future
  2. queue
  3. object_semaphore
  4. barrier
  5. and_gate
  6. composable_guard - Composable guards operate in a manner similar to locks, but are applied only to asynchronous functions. The guard (or guards) is automatically locked at the beginning of a specified task and automatically unlocked at the end. Because guards are never added to an existing task's execution context, the calling of guards is freely composable and can never deadlock.

To run a task protected by a single guard, simply declare the guard and call run_guarded() with a function (task).

hpx::lcos::local::guard gu;
run_guarded(gu,task);

If a single method needs to run with multiple guards, use a guard set.

hpx::lcos::local::guard_set gs;
boost::shared_ptr<hpx::lcos::local::guard> gu1(new hpx::lcos::local::guard());
boost::shared_ptr<hpx::lcos::local::guard> gu2(new hpx::lcos::local::guard());
gs.add(*gu1);
gs.add(*gu2);
run_guarded(gs,task);

Guards use two atomic operations (which are not called repeatedly) to manage what they do, so overhead should be extremely low.

  1. conditional_trigger
  2. counting_semaphore
  3. dataflow
  4. event
  5. mutex
  6. once
  7. recursive_mutex
  8. spinlock
  9. spinlock_no_backoff
  10. trigger

Concurrency is about both decomposing a program into parts and composing those parts so that they work well individually and together. It is in the composition of connected and multicore components where today's C++ libraries are still lacking.

The functionality of std::future offers a partial solution. It allows for the separation of the initiation of an operation and the act of waiting for its result; however, the act of waiting is synchronous. In communication-intensive code this act of waiting can be unpredictable, inefficient, and simply frustrating. The example below illustrates a possible synchronous wait using futures.

#include <future>
using namespace std;
int main()
{
    future<int> f = async([]() { return 123; });
    int result = f.get(); // might block
}

For this reason, HPX implements a set of extensions to std::future (as proposed by N4313). This proposal introduces the following key asynchronous operations to hpx::future, hpx::shared_future, and hpx::async, which enhance and enrich these facilities.

Table 15. Facilities extending std::future

Facility

Description

hpx::future::then

In asynchronous programming, it is very common for one asynchronous operation, on completion, to invoke a second operation and pass data to it. The current C++ standard does not allow one to register a continuation to a future. With then, instead of waiting for the result, a continuation is "attached" to the asynchronous operation, which is invoked when the result is ready. Continuations registered using the then function will help to avoid blocking waits or wasting threads on polling, greatly improving the responsiveness and scalability of an application.

unwrapping constructor for hpx::future

In some scenarios, you might want to create a future that returns another future, resulting in nested futures. Although it is possible to write code to unwrap the outer future and retrieve the nested future and its result, such code is not easy to write because you must handle exceptions and it may cause a blocking call. Unwrapping mitigates this problem by performing an asynchronous call to unwrap the outermost future.

hpx::future::is_ready

There are often situations where a get() call on a future may not be a blocking call, or is only a blocking call under certain circumstances. This function gives the ability to test for early completion and allows us to avoid associating a continuation, which needs to be scheduled with some non-trivial overhead and near-certain loss of cache efficiency.

hpx::make_ready_future

Some functions may know the value at the point of construction. In these cases the value is immediately available, but needs to be returned as a future. By using hpx::make_ready_future a future can be created which holds a pre-computed result in its shared state. In the current standard it is non-trivial to create a future directly from a value. First a promise must be created, then the promise is set, and lastly the future is retrieved from the promise. This can now be done with one operation.


The standard also omits the ability to compose multiple futures. This is a common pattern that is ubiquitous in other asynchronous frameworks and is absolutely necessary in order to make C++ a powerful asynchronous programming language. Omitting these functions is like offering Boolean algebra without AND/OR.

In addition to the extensions proposed by N4313, HPX adds functions that allow several futures to be composed in more flexible ways.

Table 16. Facilities for Composing hpx::futures

Facility

Description

Comment

hpx::when_any,
hpx::when_any_n

Asynchronously wait for at least one of multiple future or shared_future objects to finish.

N4313, ..._n versions are HPX only

hpx::wait_any,
hpx::wait_any_n

Synchronously wait for at least one of multiple future or shared_future objects to finish.

HPX only

hpx::when_all,
hpx::when_all_n

Asynchronously wait for all future and shared_future objects to finish.

N4313, ..._n versions are HPX only

hpx::wait_all,
hpx::wait_all_n

Synchronously wait for all future and shared_future objects to finish.

HPX only

hpx::when_some,
hpx::when_some_n

Asynchronously wait for multiple future and shared_future objects to finish.

HPX only

hpx::wait_some,
hpx::wait_some_n

Synchronously wait for multiple future and shared_future objects to finish.

HPX only

hpx::when_each,
hpx::when_each_n

Asynchronously wait for multiple future and shared_future objects to finish and call a function for each of the future objects as soon as it becomes ready.

HPX only

hpx::wait_each,
hpx::wait_each_n

Synchronously wait for multiple future and shared_future objects to finish and call a function for each of the future objects as soon as it becomes ready.

HPX only
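
As a small sketch of composing futures, the snippet below combines two asynchronous results with hpx::when_all, assuming act1 is the plain action instance (action1_type) from the continuation example above (action1 returns its argument plus one):

```cpp
std::vector<hpx::future<boost::int32_t>> futures;
futures.push_back(hpx::async(act1, hpx::find_here(), 1));
futures.push_back(hpx::async(act1, hpx::find_here(), 2));

// when_all returns a future which becomes ready once all input futures
// are ready; then attaches a continuation receiving that future.
hpx::future<boost::int32_t> sum =
    hpx::when_all(std::move(futures)).then(
        [](hpx::future<std::vector<hpx::future<boost::int32_t>>> f)
        {
            std::vector<hpx::future<boost::int32_t>> v = f.get();
            return v[0].get() + v[1].get();
        });
hpx::cout << sum.get() << "\n";   // 5, i.e. (1+1) + (2+1)
```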


In preparation for the upcoming C++ Standards we currently see several proposals targeting different facilities supporting parallel programming. HPX implements (and extends) some of those proposals. This is well aligned with our strategy to align the APIs exposed from HPX with current and future C++ Standards.

At this point, HPX implements several of the C++ Standardization working papers, most notably N4409 (Working Draft, Technical Specification for C++ Extensions for Parallelism), N4411 (Task Blocks), and N4406 (Parallel Algorithms Need Executors).

A parallel algorithm is a function template described by this document which is declared in the (inline) namespace hpx::parallel::v1.

[Note]Note

For compilers which do not support inline namespaces, all of the namespace v1 is imported into the namespace hpx::parallel. The effect is similar to what inline namespaces would do, namely all names defined in hpx::parallel::v1 are accessible from the namespace hpx::parallel as well.

All parallel algorithms are very similar in semantics to their sequential counterparts (as defined in the namespace std) with an additional formal template parameter named ExecutionPolicy. The execution policy is generally passed as the first argument to any of the parallel algorithms and describes the manner in which the execution of these algorithms may be parallelized and the manner in which they apply user-provided function objects.
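
For example, an invocation of the parallel for_each might look like the following sketch (the header name is an assumption; in this HPX version the execution policy objects are exposed as hpx::parallel::seq, hpx::parallel::par, etc.):

```cpp
#include <hpx/include/parallel_algorithm.hpp>   // header name assumed
#include <vector>

std::vector<int> v = {1, 2, 3, 4, 5};

// Invoked with the parallel execution policy, the lambda may run in
// unspecified threads in unordered fashion; it must not introduce
// data races.
hpx::parallel::for_each(hpx::parallel::par, v.begin(), v.end(),
    [](int& i) { i *= 2; });
```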

The applications of function objects in parallel algorithms invoked with an execution policy object of type sequential_execution_policy or sequential_task_execution_policy execute in sequential order. For sequential_execution_policy the execution happens in the calling thread.

The applications of function objects in parallel algorithms invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

[Important]Important

It is the caller's responsibility to ensure correctness, for example that the invocation does not introduce data races or deadlocks.

In HPX, the applications of function objects in parallel algorithms invoked with an execution policy of type parallel_vector_execution_policy are equivalent to those invoked with parallel_execution_policy.

Algorithms invoked with an execution policy object of type execution_policy execute internally as if invoked with the contained execution policy object. No exception is thrown when an execution_policy contains an execution policy of type sequential_task_execution_policy or parallel_task_execution_policy (which normally turn the algorithm into its asynchronous version). In this case the execution is semantically equivalent to the case of passing a sequential_execution_policy or parallel_execution_policy contained in the execution_policy object respectively.
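
The dispatch on the execution policy type described above can be sketched in plain C++. This is a simplified illustration (my_for_each and the policy types are hypothetical names, not HPX's implementation): the sequential overload applies the function object in order in the calling thread, while the parallel overload applies it in unspecified threads.

```cpp
#include <cassert>
#include <functional>
#include <future>
#include <vector>

// Hypothetical policy tags standing in for sequential_execution_policy
// and parallel_execution_policy.
struct sequential_policy {};   // apply in the calling thread, in order
struct parallel_policy {};     // apply in unspecified threads, unordered

template <typename Iter, typename F>
void my_for_each(sequential_policy, Iter first, Iter last, F f)
{
    for (; first != last; ++first)
        f(*first);             // sequential order, calling thread
}

template <typename Iter, typename F>
void my_for_each(parallel_policy, Iter first, Iter last, F f)
{
    std::vector<std::future<void>> tasks;
    for (; first != last; ++first)
        tasks.push_back(std::async(std::launch::async, f, std::ref(*first)));
    for (auto& t : tasks)
        t.get();               // wait for all applications to finish
}
```

The caller selects the behavior simply by passing a different first argument, which is the call shape the HPX parallel algorithms expose.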

Parallel Exceptions

During the execution of a standard parallel algorithm, if temporary memory resources are required and none are available, the algorithm throws a std::bad_alloc exception.

During the execution of any of the parallel algorithms, if the application of a function object terminates with an uncaught exception, the behavior of the program is determined by the type of execution policy used to invoke the algorithm. For the execution policies implemented by HPX, uncaught exceptions are collected in an exception_list object which is rethrown to the caller once the algorithm exits (or, for the task execution policies, made available through the future returned by the algorithm).

For example, the number of invocations of the user-provided function object in for_each is unspecified. When for_each is executed sequentially, only one exception will be contained in the exception_list object.

These guarantees imply that, unless the algorithm has failed to allocate memory and terminated with std::bad_alloc, all exceptions thrown during the execution of the algorithm are communicated to the caller. It is unspecified whether an algorithm implementation will "forge ahead" after encountering and capturing a user exception.

The algorithm may terminate with the std::bad_alloc exception even if one or more user-provided function objects have terminated with an exception. For example, this can happen when an algorithm fails to allocate memory while creating or adding elements to the exception_list object.
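
The aggregation behaviour described above can be sketched as follows. This is a hypothetical illustration (the exception_list type here is a hand-rolled stand-in, not HPX's type) of an implementation that "forges ahead" after capturing a user exception and reports all captured exceptions to the caller at once:

```cpp
#include <cassert>
#include <exception>
#include <stdexcept>
#include <vector>

// Stand-in for the exception_list type described by the Parallelism TS.
struct exception_list : std::exception
{
    std::vector<std::exception_ptr> exceptions;
    char const* what() const noexcept override { return "exception_list"; }
};

// Hypothetical algorithm skeleton: every application of f that throws is
// captured; execution continues, and all exceptions are communicated to
// the caller as one aggregate exception at the end.
template <typename Iter, typename F>
void for_each_collecting(Iter first, Iter last, F f)
{
    exception_list errors;
    for (; first != last; ++first)
    {
        try { f(*first); }
        catch (...) { errors.exceptions.push_back(std::current_exception()); }
    }
    if (!errors.exceptions.empty())
        throw errors;
}
```

Whether a real implementation forges ahead like this or stops at the first exception is unspecified; only the delivery of the captured exceptions to the caller is guaranteed.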

Parallel Algorithms

HPX provides implementations of the following parallel algorithms:

Table 17. Non-modifying Parallel Algorithms (In Header: <hpx/include/parallel_algorithm.hpp>)

Name

Description

In Header

hpx::parallel::all_of

Checks if a predicate is true for all of the elements in a range.

<hpx/include/parallel_all_any_none.hpp>

hpx::parallel::any_of

Checks if a predicate is true for any of the elements in a range.

<hpx/include/parallel_all_any_none.hpp>

hpx::parallel::none_of

Checks if a predicate is true for none of the elements in a range.

<hpx/include/parallel_all_any_none.hpp>

hpx::parallel::for_each

Applies a function to a range of elements.

<hpx/include/parallel_for_each.hpp>

hpx::parallel::for_each_n

Applies a function to a number of elements.

<hpx/include/parallel_for_each.hpp>

hpx::parallel::count

Returns the number of elements equal to a given value.

<hpx/include/parallel_count.hpp>

hpx::parallel::count_if

Returns the number of elements satisfying a specific criteria.

<hpx/include/parallel_count.hpp>

hpx::parallel::equal

Determines if two sets of elements are the same.

<hpx/include/parallel_equal.hpp>

hpx::parallel::mismatch

Finds the first position where two ranges differ.

<hpx/include/parallel_mismatch.hpp>

hpx::parallel::find

Finds the first element equal to a given value.

<hpx/include/parallel_find.hpp>

hpx::parallel::find_end

Finds the last sequence of elements in a certain range.

<hpx/include/parallel_find.hpp>

hpx::parallel::find_if

Finds the first element satisfying a specific criteria.

<hpx/include/parallel_find.hpp>

hpx::parallel::find_first_of

Searches for any one of a set of elements.

<hpx/include/parallel_find.hpp>

hpx::parallel::find_if_not

Finds the first element not satisfying a specific criteria.

<hpx/include/parallel_find.hpp>

hpx::parallel::adjacent_find

Finds the first two adjacent elements in a range that are equal (or satisfy a given predicate).

<hpx/include/parallel_adjacent_find.hpp>

hpx::parallel::lexicographical_compare

Checks if a range of values is lexicographically less than another range of values.

<hpx/include/parallel_lexicographical_compare.hpp>

hpx::parallel::search

Searches for a range of elements.

<hpx/include/parallel_search.hpp>

hpx::parallel::search_n

Searches for a number of consecutive copies of an element in a range.

<hpx/include/parallel_search.hpp>

hpx::parallel::inclusive_scan

Does an inclusive parallel scan over a range of elements.

<hpx/include/parallel_scan.hpp>

hpx::parallel::exclusive_scan

Does an exclusive parallel scan over a range of elements.

<hpx/include/parallel_scan.hpp>


Table 18. Modifying Parallel Algorithms (In Header: <hpx/include/parallel_algorithm.hpp>)

Name

Description

In Header

hpx::parallel::copy

Copies a range of elements to a new location.

<hpx/include/parallel_copy.hpp>

hpx::parallel::copy_n

Copies a number of elements to a new location.

<hpx/include/parallel_copy.hpp>

hpx::parallel::copy_if

Copies the elements from a range to a new location for which the given predicate is true.

<hpx/include/parallel_copy.hpp>

hpx::parallel::move

Moves a range of elements to a new location.

<hpx/include/parallel_move.hpp>

hpx::parallel::fill

Assigns a range of elements a certain value.

<hpx/include/parallel_fill.hpp>

hpx::parallel::fill_n

Assigns a value to a number of elements.

<hpx/include/parallel_fill.hpp>

hpx::parallel::transform

Applies a function to a range of elements.

<hpx/include/parallel_transform.hpp>

hpx::parallel::generate

Saves the result of a function in a range.

<hpx/include/parallel_generate.hpp>

hpx::parallel::generate_n

Saves the result of N applications of a function.

<hpx/include/parallel_generate.hpp>

hpx::parallel::remove_copy

Copies the elements from a range to a new location that are not equal to the given value.

<hpx/include/parallel_remove_copy.hpp>

hpx::parallel::remove_copy_if

Copies the elements from a range to a new location for which the given predicate is false.

<hpx/include/parallel_remove_copy.hpp>

hpx::parallel::replace

Replaces all values satisfying specific criteria with another value.

<hpx/include/parallel_replace.hpp>

hpx::parallel::replace_if

Replaces all values satisfying specific criteria with another value.

<hpx/include/parallel_replace.hpp>

hpx::parallel::replace_copy

Copies a range, replacing elements satisfying specific criteria with another value.

<hpx/include/parallel_replace.hpp>

hpx::parallel::replace_copy_if

Copies a range, replacing elements satisfying specific criteria with another value.

<hpx/include/parallel_replace.hpp>

hpx::parallel::reverse

Reverses the order of elements in a range.

<hpx/include/parallel_reverse.hpp>

hpx::parallel::reverse_copy

Creates a copy of a range that is reversed.

<hpx/include/parallel_reverse.hpp>

hpx::parallel::rotate

Rotates the order of elements in a range.

<hpx/include/parallel_rotate.hpp>

hpx::parallel::rotate_copy

Copies and rotates a range of elements.

<hpx/include/parallel_rotate.hpp>

hpx::parallel::swap_ranges

Swaps two ranges of elements.

<hpx/include/parallel_swap_ranges.hpp>


Table 19. Set operations on sorted sequences (In Header: <hpx/include/parallel_algorithm.hpp>)

Name

Description

In Header

hpx::parallel::includes

Returns true if one set is a subset of another.

<hpx/include/parallel_set_operations.hpp>

hpx::parallel::set_difference

Computes the difference between two sets.

<hpx/include/parallel_set_operations.hpp>

hpx::parallel::set_intersection

Computes the intersection of two sets.

<hpx/include/parallel_set_operations.hpp>

hpx::parallel::set_symmetric_difference

Computes the symmetric difference between two sets.

<hpx/include/parallel_set_operations.hpp>

hpx::parallel::set_union

Computes the union of two sets.

<hpx/include/parallel_set_operations.hpp>


Table 20. Minimum/maximum operations (In Header: <hpx/include/parallel_algorithm.hpp>)

Name

Description

In Header

hpx::parallel::max_element

Returns the largest element in a range.

<hpx/include/parallel_minmax.hpp>

hpx::parallel::min_element

Returns the smallest element in a range.

<hpx/include/parallel_minmax.hpp>

hpx::parallel::minmax_element

Returns the smallest and the largest element in a range.

<hpx/include/parallel_minmax.hpp>


Table 21. Sorting Operations (In Header: <hpx/include/parallel_algorithm.hpp>)

Name

Description

In Header

hpx::parallel::is_sorted

Returns true if the elements in a range are sorted.

<hpx/include/parallel_is_sorted.hpp>

hpx::parallel::is_sorted_until

Returns the first unsorted element in a range.

<hpx/include/parallel_is_sorted.hpp>

hpx::parallel::is_partitioned

Returns true if all elements in a range that satisfy a predicate precede those that do not.

<hpx/include/parallel_is_partitioned.hpp>

hpx::parallel::sort

Sorts the elements in a range.

<hpx/include/parallel_sort.hpp>

hpx::parallel::sort_by_key

Sorts one range of data using keys supplied in another range.

<hpx/include/parallel_sort.hpp>


Table 22. Numeric Parallel Algorithms (In Header: <hpx/include/parallel_numeric.hpp>)

Name

Description

In Header

hpx::parallel::adjacent_difference

Calculates the difference between each element in an input range and the preceding element.

<hpx/include/parallel_adjacent_difference.hpp>

hpx::parallel::inner_product

Accumulates the inner products of two input ranges.

<hpx/include/parallel_inner_product.hpp>

hpx::parallel::reduce

Sums up a range of elements.

<hpx/include/parallel_reduce.hpp>

hpx::parallel::transform_reduce

Sums up a range of elements after applying a function.

<hpx/include/parallel_transform_reduce.hpp>

hpx::parallel::transform_inclusive_scan

Does an inclusive parallel scan over a range of elements after applying a function.

<hpx/include/parallel_scan.hpp>

hpx::parallel::transform_exclusive_scan

Does an exclusive parallel scan over a range of elements after applying a function.

<hpx/include/parallel_scan.hpp>


Table 23. Dynamic Memory Management (In Header: <hpx/include/parallel_memory.hpp>)

Name

Description

In Header

hpx::parallel::uninitialized_copy

Copies a range of objects to an uninitialized area of memory.

<hpx/include/parallel_uninitialized_copy.hpp>

hpx::parallel::uninitialized_copy_n

Copies a number of objects to an uninitialized area of memory.

<hpx/include/parallel_uninitialized_copy.hpp>

hpx::parallel::uninitialized_fill

Copies an object to an uninitialized area of memory.

<hpx/include/parallel_uninitialized_fill.hpp>

hpx::parallel::uninitialized_fill_n

Copies an object to an uninitialized area of memory.

<hpx/include/parallel_uninitialized_fill.hpp>
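
Since all of the algorithms listed above mirror the semantics of their std:: counterparts, a sequential sketch illustrates what the HPX forms compute; with HPX available one would write, for example, hpx::parallel::count_if(hpx::parallel::par, first, last, pred) to obtain the same result in parallel. The helper names below are hypothetical:

```cpp
#include <algorithm>
#include <cassert>
#include <numeric>
#include <vector>

// hpx::parallel::count_if(par, v.begin(), v.end(), pred) counts the same
// elements; only the execution policy argument is added.
int count_even(std::vector<int> const& v)
{
    return static_cast<int>(
        std::count_if(v.begin(), v.end(), [](int x) { return x % 2 == 0; }));
}

// hpx::parallel::reduce(par, v.begin(), v.end(), 0) computes the same sum,
// but may apply the (associative) operation in any order.
int sum(std::vector<int> const& v)
{
    return std::accumulate(v.begin(), v.end(), 0);
}
```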


The existing Version 1 of the Parallelism TS (N4409) exposes parallel execution to the programmer in the form of standard algorithms that accept execution policies. A companion executor facility both provides a suitable substrate for implementing these algorithms in a standard way and provides a mechanism for exercising programmatic control over where parallel work should be executed.

The algorithms and execution policies specified by the Parallelism TS are designed to permit implementation on the broadest range of platforms. In addition to preemptive thread pools common on some platforms, implementations of these algorithms may want to take advantage of a number of mechanisms for parallel execution, including cooperative fibers, GPU threads, and SIMD vector units, among others. This diversity of possible execution resources strongly suggests that a suitable abstraction encapsulating the details of how work is created across diverse platforms would be of significant value to parallel algorithm implementations. Suitably defined executors provide just such a facility.

An executor is an object responsible for creating execution agents on which work is performed, thus abstracting the (potentially platform-specific) mechanisms for launching work. To accommodate the goals of the Parallelism TS, whose algorithms aim to support the broadest range of possible platforms, the requirements that all executors are expected to fulfill are small. They are also consistent with a broad range of execution semantics, including preemptive threads, cooperative fibers, GPU threads, and SIMD vector units, among others.

The executors implemented by HPX are aligned with the interfaces proposed by N4406 (Parallel Algorithms Need Executors).

Executors are modular components for requisitioning execution agents. During parallel algorithm execution, execution policies generate execution agents by requesting their creation from an associated executor. Rather than focusing on asynchronous task queueing, our complementary treatment of executors casts them as modular components for invoking functions over the points of an index space. We believe that executors may be conceived of as allocators for execution agents and our interface's design reflects this analogy. The process of requesting agents from an executor is mediated via the hpx::parallel::executor_traits API, which is analogous to the interaction between containers and allocator_traits.

With executor_traits, clients manipulate all types of executors uniformly:

executor_traits<my_executor_t>::execute(my_executor,
    [](size_t i){ /* perform task i */ },
    range(0, n));

This call synchronously creates a group of invocations of the given function, where each individual invocation within the group is identified by a unique integer i in [0, n). Other functions in the interface exist to create groups of invocations asynchronously and support the special case of creating a singleton group, resulting in four different combinations.

Though this interface appears to require executor authors to implement four different basic operations, there is really only one requirement: async_execute(). In practice, the other operations may be defined in terms of this single basic primitive. However, some executors will naturally specialize all four operations for maximum efficiency.
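
The reduction of all operations to the single async_execute() primitive can be sketched in plain C++ (thread_executor and the free function execute below are hypothetical names; HPX's executor_traits performs an analogous derivation):

```cpp
#include <cassert>
#include <future>
#include <utility>

// A minimal executor: the only operation it must provide is the
// asynchronous primitive async_execute().
struct thread_executor
{
    template <typename F>
    auto async_execute(F f) -> std::future<decltype(f())>
    {
        return std::async(std::launch::async, std::move(f));
    }
};

// The synchronous operation can be derived from the asynchronous
// primitive by waiting on the returned future.
template <typename Executor, typename F>
auto execute(Executor& exec, F f) -> decltype(f())
{
    return exec.async_execute(std::move(f)).get();
}
```

An executor for which a more efficient synchronous path exists would instead specialize execute() directly, which is the "maximum efficiency" case mentioned above.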

For maximum implementation flexibility, executor_traits does not require executors to implement a particular exception reporting mechanism. Executors may choose whether or not to report exceptions, and if so, in what manner they are communicated back to the caller. However, all executors in HPX report exceptions in a manner consistent with the behavior of execution policies described by the Parallelism TS, where multiple exceptions are collected into an exception_list. This list is reported through async_execute()'s returned future, or thrown directly by execute().

In HPX we have implemented the following executor types:

  • hpx::parallel::sequential_executor: creates groups of sequential execution agents which execute in the calling thread. The sequential order is given by the lexicographical order of indices in the index space.
  • hpx::parallel::parallel_executor: creates groups of parallel execution agents which execute in threads implicitly created by the executor. This executor uses a given launch policy.
  • hpx::parallel::service_executor: creates groups of parallel execution agents which execute in one of the kernel threads associated with a given pool category (I/O, parcel, or timer pool, or on the main thread of the application).
  • hpx::parallel::local_priority_queue_executor, hpx::parallel::local_queue_executor, and hpx::parallel::static_priority_queue_executor: create executors on top of the corresponding HPX schedulers.
  • hpx::parallel::distribution_policy_executor: creates executors using any of the existing distribution policies (such as hpx::components::colocating_distribution_policy).

Executors as described in the previous section add a powerful customization capability to any facility which exposes management of parallel execution. However, sometimes it is necessary to be able to customize certain parameters of the execution as well. In HPX we introduce the notion of execution parameters and execution parameter traits. At this point, the only parameter which can be customized is the size of the chunks of work executed on a single HPX-thread (such as the number of loop iterations combined to run as a single task).

An executor parameter object is responsible for exposing the calculation of the size of the chunks scheduled. It abstracts the (potentially platform-specific) algorithms used to determine those chunk sizes.

The way executor parameters are implemented is aligned with the way executors are implemented. All functionalities of concrete executor parameter types are exposed and accessible through a corresponding hpx::parallel::executor_parameter_traits type.

With executor_parameter_traits, clients access all types of executor parameters uniformly:

std::size_t chunk_size =
    executor_parameter_traits<my_parameter_t>::get_chunk_size(my_parameter,
        my_executor, [](){ return 0; }, num_tasks);

This call synchronously retrieves the size of a single chunk of loop iterations (or similar) to combine for execution on a single HPX-thread if the overall number of tasks to schedule is given by num_tasks. The lambda function exposes a means of test-probing the execution of a single iteration for performance measurement purposes (the execution parameter type might dynamically determine the execution time of one or more tasks in order to calculate the chunk size, see hpx::parallel::auto_chunk_size for an example of such an executor parameter type).

Other functions in the interface exist to discover whether an executor parameter type should be invoked once (i.e. it returns a static chunk size, see hpx::parallel::static_chunk_size) or whether it should be invoked for each scheduled chunk of work (i.e. it returns a variable chunk size, for an example, see hpx::parallel::guided_chunk_size).

Though this interface appears to require executor parameter type authors to implement several basic operations, in fact none is required: in practice, all operations have sensible defaults. However, some executor parameter types will naturally specialize all operations for maximum efficiency.

In HPX we have implemented the following executor parameter types:

  • hpx::parallel::auto_chunk_size: Loop iterations are divided into pieces and then assigned to threads. The number of loop iterations combined is determined based on measurements of how long the execution of 1% of the overall number of iterations takes. This executor parameters type makes sure that as many loop iterations are combined as necessary to run for the amount of time specified.
  • hpx::parallel::static_chunk_size: Loop iterations are divided into pieces of a given size and then assigned to threads. If the size is not specified, the iterations are evenly (if possible) divided contiguously among the threads. This executor parameters type is equivalent to OpenMP's STATIC scheduling directive.
  • hpx::parallel::dynamic_chunk_size: Loop iterations are divided into pieces of a given size and then dynamically scheduled among the cores; when a core finishes one chunk, it is dynamically assigned another. If the size is not specified, the default chunk size is 1. This executor parameters type is equivalent to OpenMP's DYNAMIC scheduling directive.
  • hpx::parallel::guided_chunk_size: Iterations are dynamically assigned to cores in blocks as cores request them until no blocks remain to be assigned. Similar to dynamic_chunk_size except that the block size decreases each time a number of loop iterations is given to a thread. The size of the initial block is proportional to number_of_iterations / number_of_cores. Subsequent blocks are proportional to number_of_iterations_remaining / number_of_cores. The optional chunk size parameter defines the minimum block size. The default minimal chunk size is 1. This executor parameters type is equivalent to OpenMP's GUIDED scheduling directive.
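
The chunk size calculations performed by these parameter types can be sketched as follows. The helper functions are hypothetical simplifications (HPX's auto_chunk_size, for instance, additionally takes timing measurements into account):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// static_chunk_size-style calculation: divide num_tasks iterations evenly
// (rounding up) among num_cores workers.
std::size_t static_chunk(std::size_t num_tasks, std::size_t num_cores)
{
    return (num_tasks + num_cores - 1) / num_cores;
}

// guided_chunk_size-style schedule: each block is proportional to
// remaining_iterations / num_cores, never smaller than min_chunk, so the
// block size decreases as the loop progresses.
std::vector<std::size_t> guided_chunks(std::size_t num_tasks,
    std::size_t num_cores, std::size_t min_chunk = 1)
{
    std::vector<std::size_t> chunks;
    while (num_tasks != 0)
    {
        std::size_t c = num_tasks / num_cores;
        if (c < min_chunk) c = min_chunk;
        if (c > num_tasks) c = num_tasks;
        chunks.push_back(c);
        num_tasks -= c;
    }
    return chunks;
}
```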

The define_task_block, run and wait functions implemented by HPX are based on N4411 and on the task_block concept that is part of the common subset of the Microsoft Parallel Patterns Library (PPL) and the Intel Threading Building Blocks (TBB) libraries.

This implementation adopts a simpler syntax than that exposed by those libraries - one influenced by language-based concepts such as spawn and sync from Cilk++ and async and finish from X10. It improves on existing practice in the following ways:

  • The exception handling model is simplified and more consistent with normal C++ exceptions.
  • Most violations of strict fork-join parallelism can be enforced at compile time (with compiler assistance, in some cases).
  • The syntax allows scheduling approaches other than child stealing.

Consider an example of a parallel traversal of a tree, where a user-provided function compute is applied to each node of the tree, returning the sum of the results:

template <typename Func>
int traverse(node& n, Func && compute)
{
    int left = 0, right = 0;
    define_task_block(
        [&](task_block<>& tr) {
            if (n.left)
                tr.run([&] { left = traverse(*n.left, compute); });
            if (n.right)
                tr.run([&] { right = traverse(*n.right, compute); });
        });

    return compute(n) + left + right;
}

The example above demonstrates the use of two of the functions, define_task_block and the run member function of a task_block.

The define_task_block function delineates a region in program code potentially containing invocations of threads spawned by the run member function of the task_block class. The run function spawns an HPX thread, a unit of work that is allowed to execute in parallel with respect to the caller. Any parallel tasks spawned by run within the task block are joined back to a single thread of execution at the end of the define_task_block. run takes a user-provided function object f and starts it asynchronously - i.e. it may return before the execution of f completes. The HPX scheduler may choose to run f immediately or delay running f until compute resources become available.
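
The fork-join contract described above can be sketched with standard C++ futures (simple_task_block and simple_define_task_block are hypothetical stand-ins, not HPX's implementation):

```cpp
#include <cassert>
#include <future>
#include <utility>
#include <vector>

// Stand-in for task_block: run() spawns work asynchronously and every
// spawned task is recorded so it can be joined later.
class simple_task_block
{
    std::vector<std::future<void>> tasks_;

public:
    template <typename F>
    void run(F f)
    {
        tasks_.push_back(std::async(std::launch::async, std::move(f)));
    }

    void join_all()
    {
        for (auto& t : tasks_)
            t.get();
    }
};

// Stand-in for define_task_block: all tasks spawned inside the user
// function are implicitly joined before the call returns.
template <typename F>
void simple_define_task_block(F f)
{
    simple_task_block tb;
    f(tb);
    tb.join_all();
}
```

HPX's real task_block additionally cannot be constructed directly and reports exceptions from spawned tasks in aggregate, as described below.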

A task_block can be constructed only by define_task_block because it has no public constructors. Thus, run can be invoked (directly or indirectly) only from a user-provided function passed to define_task_block:

void g();

void f(task_block<>& tr)
{
    tr.run(g);          // OK, invoked from within task_block in h
}

void h()
{
    define_task_block(f);
}

int main()
{
    task_block<> tr;    // Error: no public constructor
    tr.run(g);          // No way to call run outside of a define_task_block
    return 0;
}
Using Execution Policies with Task Blocks

In HPX we have implemented some extensions for task_block beyond the actual standards proposal N4411. The main addition is that a task_block can be invoked with an execution policy as its first argument, very similar to the parallel algorithms.

An execution policy is an object that expresses the requirements on the ordering of functions invoked as a consequence of the invocation of a task block. Enabling passing an execution policy to define_task_block gives the user control over the amount of parallelism employed by the created task_block. In the following example the use of an explicit par execution policy makes the user's intent explicit:

template <typename Func>
int traverse(node *n, Func&& compute)
{
    int left = 0, right = 0;

    define_task_block(
        par,                // parallel_execution_policy
        [&](task_block<>& tb) {
            if (n->left)
                tb.run([&] { left = traverse(n->left, compute); });
            if (n->right)
                tb.run([&] { right = traverse(n->right, compute); });
        });

    return compute(n) + left + right;
}

This also causes the hpx::parallel::task_block object to be a template in our implementation. The template argument is the type of the execution policy used to create the task block; it defaults to hpx::parallel::parallel_execution_policy.

HPX still supports calling hpx::parallel::define_task_block without an explicit execution policy. In this case the task block will run using the hpx::parallel::parallel_execution_policy.

HPX also adds the ability to access the execution policy which was used to create a given task_block.

Using Executors to run Tasks

Often, we want to be able to not only define an execution policy to use by default for all spawned tasks inside the task block, but in addition to customize the execution context for one of the tasks executed by task_block::run. Adding an optionally passed executor instance to that function enables this use case:

template <typename Func>
int traverse(node *n, Func&& compute)
{
    int left = 0, right = 0;

    define_task_block(
        par,                // parallel_execution_policy
        [&](auto& tb) {
            if (n->left)
            {
                // use explicitly specified executor to run this task
                tb.run(my_executor(), [&] { left = traverse(n->left, compute); });
            }
            if (n->right)
            {
                // use the executor associated with the par execution policy
                tb.run([&] { right = traverse(n->right, compute); });
            }
        });

    return compute(n) + left + right;
}

HPX still supports calling hpx::parallel::task_block::run without an explicit executor object. In this case the task will be run using the executor associated with the execution policy which was used to call hpx::parallel::define_task_block.

Like in any other asynchronous invocation scheme it is important to be able to handle error conditions occurring while the asynchronous (and possibly remote) operation is executed. In HPX all error handling is based on standard C++ exception handling. Any exception thrown during the execution of an asynchronous operation will be transferred back to the original invocation locality, where it is rethrown during synchronization with the calling thread.

The source code for this example can be found here: error_handling.cpp.

Working with Exceptions

For the following description we assume that the function raise_exception() is executed by invoking the plain action raise_exception_action:

void raise_exception()
{
    HPX_THROW_EXCEPTION(hpx::no_success, "raise_exception", "simulated error");
}
HPX_PLAIN_ACTION(raise_exception, raise_exception_action);

The exception is thrown using the macro HPX_THROW_EXCEPTION. The type of the thrown exception is hpx::exception. This associates additional diagnostic information with the exception, such as file name and line number, locality id and thread id, and stack backtrace from the point where the exception was thrown.

Any exception thrown during the execution of an action is transferred back to the (asynchronous) invocation site. It will be rethrown in this context when the calling thread tries to wait for the result of the action by invoking either future<>::get() or the synchronous action invocation wrapper as shown here:

hpx::cout << "Error reporting using exceptions\n";
try {
    // invoke raise_exception() which throws an exception
    raise_exception_action do_it;
    do_it(hpx::find_here());
}
catch (hpx::exception const& e) {
    // Print just the essential error information.
    hpx::cout << "caught exception: " << e.what() << "\n\n";

    // Print all of the available diagnostic information as stored with
    // the exception.
    hpx::cout << "diagnostic information:"
        << hpx::diagnostic_information(e) << "\n";
}
hpx::cout << hpx::flush;
[Note]Note

The exception is transferred back to the invocation site even if it is executed on a different locality.

Additionally, this example demonstrates how an exception thrown by a (possibly remote) action can be handled. It shows the use of hpx::diagnostic_information() which retrieves all available diagnostic information from the exception as a formatted string. This includes, for instance, the name of the source file and line number, the sequence number of the OS-thread and the HPX-thread id, the locality id and the stack backtrace of the point where the original exception was thrown.

Under certain circumstances it is desirable to output only some of the diagnostics, or to output those using different formatting. For this case, HPX exposes a set of lower level functions as demonstrated in the following code snippet:

hpx::cout << "Detailed error reporting using exceptions\n";
try {
    // Invoke raise_exception() which throws an exception.
    raise_exception_action do_it;
    do_it(hpx::find_here());
}
catch (hpx::exception const& e) {
    // Print the elements of the diagnostic information separately.
    hpx::cout << "{what}: "        << hpx::get_error_what(e) << "\n";
    hpx::cout << "{locality-id}: " << hpx::get_error_locality_id(e) << "\n";
    hpx::cout << "{hostname}: "    << hpx::get_error_host_name(e) << "\n";
    hpx::cout << "{pid}: "         << hpx::get_error_process_id(e) << "\n";
    hpx::cout << "{function}: "    << hpx::get_error_function_name(e) << "\n";
    hpx::cout << "{file}: "        << hpx::get_error_file_name(e) << "\n";
    hpx::cout << "{line}: "        << hpx::get_error_line_number(e) << "\n";
    hpx::cout << "{os-thread}: "   << hpx::get_error_os_thread(e) << "\n";
    hpx::cout << "{thread-id}: "   << std::hex << hpx::get_error_thread_id(e)
        << "\n";
    hpx::cout << "{thread-description}: "
        << hpx::get_error_thread_description(e) << "\n";
    hpx::cout << "{state}: "       << std::hex << hpx::get_error_state(e)
        << "\n";
    hpx::cout << "{stack-trace}: " << hpx::get_error_backtrace(e) << "\n";
    hpx::cout << "{env}: "         << hpx::get_error_env(e) << "\n";
}
hpx::cout << hpx::flush;
Working with Error Codes

Most of the API functions exposed by HPX can be invoked in two different modes. By default those will throw an exception on error as described above. However, sometimes it is desirable not to throw an exception in case of an error condition. In this case an object instance of the hpx::error_code type can be passed as the last argument to the API function. In case of an error the error condition will be returned in that hpx::error_code instance. The following example demonstrates extracting the full diagnostic information without exception handling:

hpx::cout << "Error reporting using error code\n";

// Create a new error_code instance.
hpx::error_code ec;

// If an instance of an error_code is passed as the last argument while
// invoking the action, the function will not throw in case of an error
// but store the error information in this error_code instance instead.
raise_exception_action do_it;
do_it(hpx::find_here(), ec);

if (ec) {
    // Print just the essential error information.
    hpx::cout << "returned error: " << ec.get_message() << "\n";

    // Print all of the available diagnostic information as stored with
    // the exception.
    hpx::cout << "diagnostic information:"
        << hpx::diagnostic_information(ec) << "\n";
}

hpx::cout << hpx::flush;
[Note]Note

The error information is transferred back to the invocation site even if it is executed on a different locality.

This example shows how an error can be handled without having to resort to exceptions, and that the returned hpx::error_code instance can be used in a very similar way as the hpx::exception type above. Simply pass it to hpx::diagnostic_information(), which retrieves all available diagnostic information from the error code instance as a formatted string.

As for handling exceptions, when working with error codes, under certain circumstances it is desirable to output only some of the diagnostics, or to output those using different formatting. For this case, HPX exposes a set of lower level functions usable with error codes, as demonstrated in the following code snippet:

hpx::cout << "Detailed error reporting using error code\n";

// Create a new error_code instance.
hpx::error_code ec;

// If an instance of an error_code is passed as the last argument while
// invoking the action, the function will not throw in case of an error
// but store the error information in this error_code instance instead.
raise_exception_action do_it;
do_it(hpx::find_here(), ec);

if (ec) {
    // Print the elements of the diagnostic information separately.
    hpx::cout << "{what}: "        << hpx::get_error_what(ec) << "\n";
    hpx::cout << "{locality-id}: " << hpx::get_error_locality_id(ec) << "\n";
    hpx::cout << "{hostname}: "    << hpx::get_error_host_name(ec) << "\n";
    hpx::cout << "{pid}: "         << hpx::get_error_process_id(ec) << "\n";
    hpx::cout << "{function}: "    << hpx::get_error_function_name(ec)
        << "\n";
    hpx::cout << "{file}: "        << hpx::get_error_file_name(ec) << "\n";
    hpx::cout << "{line}: "        << hpx::get_error_line_number(ec) << "\n";
    hpx::cout << "{os-thread}: "   << hpx::get_error_os_thread(ec) << "\n";
    hpx::cout << "{thread-id}: "   << std::hex
        << hpx::get_error_thread_id(ec) << "\n";
    hpx::cout << "{thread-description}: "
        << hpx::get_error_thread_description(ec) << "\n\n";
    hpx::cout << "{state}: "       << std::hex << hpx::get_error_state(ec)
        << "\n";
    hpx::cout << "{stack-trace}: " << hpx::get_error_backtrace(ec) << "\n";
    hpx::cout << "{env}: "         << hpx::get_error_env(ec) << "\n";
}

hpx::cout << hpx::flush;

For more information please refer to the documentation of hpx::get_error_what, hpx::get_error_locality_id, hpx::get_error_host_name, hpx::get_error_process_id, hpx::get_error_function_name, hpx::get_error_file_name, hpx::get_error_line_number, hpx::get_error_os_thread, hpx::get_error_thread_id, hpx::get_error_thread_description, hpx::get_error_backtrace, hpx::get_error_env, and hpx::get_error_state.

Lightweight Error Codes

Sometimes it is not desirable to collect all the ambient information about the error at the point where it happened, as this might impose too much overhead for simple scenarios. In this case, HPX provides a lightweight error code facility which holds the error code only. The following snippet demonstrates its use:

hpx::cout << "Error reporting using a lightweight error code\n";

// Create a new error_code instance.
hpx::error_code ec(hpx::lightweight);

// If an instance of an error_code is passed as the last argument while
// invoking the action, the function will not throw in case of an error
// but store the error information in this error_code instance instead.
raise_exception_action do_it;
do_it(hpx::find_here(), ec);

if (ec) {
    // Print just the essential error information.
    hpx::cout << "returned error: " << ec.get_message() << "\n";

    // Print all of the available diagnostic information as stored with
    // the exception.
    hpx::cout << "error code:" << ec.value() << "\n";
}

hpx::cout << hpx::flush;

All functions which retrieve other diagnostic elements from the hpx::error_code will fail if called with a lightweight error_code instance.

Performance Counters in HPX are used to provide information as to how well the runtime system or an application is performing. The counter data can help determine system bottlenecks and fine-tune system and application performance. The HPX runtime system, its networking, and other layers provide counter data that an application can consume to provide users with information of how well the application is performing.

Applications can also use counter data to determine how much system resources to consume. For example, an application that transfers data over the network could consume counter data from a network switch to determine how much data to transfer without competing for network bandwidth with other network traffic. The application could use the counter data to adjust its transfer rate as the bandwidth usage from other network traffic increases or decreases.

Performance Counters are HPX parallel processes which expose a predefined interface. HPX exposes special API functions that allow one to create, manage, read the counter data, and release instances of Performance Counters. Performance Counter instances are accessed by name, and these names have a predefined structure which is described in the section Performance Counter Names. The advantage of this is that any Performance Counter can be accessed remotely (from a different locality) or locally (from the same locality). Moreover, since all counters expose their data using the same API, any code consuming counter data can be utilized to access arbitrary system information with minimal effort.

Counter data may be accessed in real time. More information about how to consume counter data can be found in the section Consuming Performance Counters.

All HPX applications provide command line options related to performance counters, such as the ability to list available counter types, or periodically query specific counters to be printed to the screen or save them in a file. For more information, please refer to the section HPX Command Line Options.

All Performance Counter instances have a name uniquely identifying each instance. This name can be used to access the counter, retrieve all related meta data, and to query the counter data (as described in the section Consuming Performance Counters). Counter names are strings with a predefined structure. The general form of a counter name is:

/objectname{full_instancename}/countername@parameters

where full_instancename could be either another (full) counter name or a string formatted as:

parentinstancename#parentindex/instancename#instanceindex

Each separate part of a countername (e.g. objectname, countername, parentinstancename, instancename, and parameters) should start with a letter ('a'...'z', 'A'...'Z') or an underscore character ('_'), optionally followed by letters, digits ('0'...'9'), hyphen ('-'), or underscore characters. Whitespace is not allowed inside a counter name. The characters '/', '{', '}', '#', and '@' have a special meaning and are used to delimit the different parts of the counter name.

The parts parentindex and instanceindex are integers. If an index is not specified, HPX will assume a default of -1.

Two Simple Examples

An example of a well-formed (and meaningful) simple counter name is:

/threads{locality#0/total}/count/cumulative

This counter returns the current cumulative number of executed (retired) HPX-threads for locality 0. The counter type of this counter is /threads/count/cumulative and the full instance name is locality#0/total. This counter type does not require an instanceindex or parameters to be specified.

In this case, the parentindex (the '0') designates the locality for which the counter instance is created. The counter will return the number of HPX-threads retired on that particular locality.

Another example for a well formed (aggregate) counter name is:

/statistics{/threads{locality#0/total}/count/cumulative}/average@500

This counter takes the simple counter from the first example, samples its values every 500 milliseconds, and returns the average of the value samples whenever it is queried. The counter type of this counter is /statistics/average and the instance name is the full name of the counter for which the values have to be averaged. In this case, the parameters (the '500') specify the sampling interval for the averaging to take place (in milliseconds).

Performance Counter Types

Every Performance Counter belongs to a specific Performance Counter type which classifies the counters into groups of common semantics. The type of a counter is identified by the objectname and the countername parts of the name.

/objectname/countername

At application start, HPX will register all available counter types on each of the localities. These counter types are held in a special Performance Counter registration database which can be later used to retrieve the meta data related to a counter type and to create counter instances based on a given counter instance name.

Performance Counter Instances

The full_instancename distinguishes different counter instances of the same counter type. The formatting of the full_instancename depends on the counter type. There are two types of counters: simple counters, which usually generate the counter values based on direct measurements, and aggregate counters, which take another counter and transform its values before generating their own counter values. An example of a simple counter is given above: counting retired HPX-threads. An aggregate counter is shown as an example above as well: calculating the average of the underlying counter values sampled at constant time intervals.

While simple counters use instance names formatted as parentinstancename#parentindex/instancename#instanceindex, most aggregate counters have the full counter name of the embedded counter as their instance name.

Not all simple counter types require specifying all 4 elements of a full counter instance name; some of the parts (parentinstancename, parentindex, instancename, and instanceindex) are optional for specific counters. Please refer to the documentation of a particular counter for more information about the formatting requirements for the name of this counter (see Existing Performance Counters).

The parameters are used to pass additional information to a counter at creation time. They are optional and they fully depend on the concrete counter. Even if a specific counter type allows additional parameters to be given, those usually are not required as sensible defaults will be chosen. Please refer to the documentation of a particular counter for more information about what parameters are supported, how to specify them, and what default values are assumed (see also Existing Performance Counters).

Every locality of an application exposes its own set of Performance Counter types and Performance Counter instances. The set of exposed counters is determined dynamically at application start based on the execution environment of the application. For instance, this set is influenced by the current hardware environment for the locality (such as whether the locality has access to accelerators), and the software environment of the application (such as the number of OS-threads used to execute HPX-threads).

Using Wildcards in Performance Counter Names

It is possible to use wildcard characters when specifying performance counter names. Performance counter names can contain 2 types of wildcard characters:

  • Wildcard characters in the performance counter type
  • Wildcard characters in the performance counter instance name

Wildcard characters have a meaning very close to the usual file name wildcard matching rules implemented by common shells (like bash).

Table 24. Wildcard characters in the performance counter type

*        This wildcard character matches any number (zero or more) of
         arbitrary characters.

?        This wildcard character matches any single arbitrary character.

[...]    This wildcard character matches any single character from the
         list specified within the square brackets.


Table 25. Wildcard characters in the performance counter instance name

*        This wildcard character matches any locality or any thread,
         depending on whether it is used for locality#* or
         worker-thread#*. No other wildcards are allowed in counter
         instance names.


You can consume performance data either using the command line interface or programmatically via the HPX API. The command line interface is easier to use, but it is less flexible and does not allow one to adjust the behaviour of your application at runtime. It provides a convenient but simplified abstraction for querying and logging performance counter data for a set of performance counters.

HPX provides a set of predefined command line options for every application which uses hpx::init for its initialization. While many more command line options are available (see HPX Command Line Options), the set of options related to Performance Counters allows one to list existing counters, and to query existing counters either once at application termination or repeatedly after a constant time interval.

The following table summarizes the available command line options:

Table 26. HPX Command Line Options Related to Performance Counters

--hpx:print-counter
    print the specified performance counter either repeatedly or before
    shutting down the system (see option --hpx:print-counter-interval)

--hpx:print-counter-interval
    print the performance counter(s) specified with --hpx:print-counter
    repeatedly after the time interval (specified in milliseconds)
    (default: 0, which means print once at shutdown)

--hpx:print-counter-destination
    print the performance counter(s) specified with --hpx:print-counter to
    the given file (default: console)

--hpx:list-counters
    list the names of all registered performance counters

--hpx:list-counter-infos
    list the description of all registered performance counters

--hpx:print-counter-format
    select the output format for the performance counter(s) specified with
    --hpx:print-counter; possible values: 'csv' (prints counter values in
    CSV format with the full counter names as header) and 'csv-short'
    (prints counter values in CSV format with the short names provided via
    --hpx:print-counter shortname,full-countername); see also option
    --hpx:no-csv-header

--hpx:no-csv-header
    print the performance counter(s) specified with --hpx:print-counter in
    the csv or csv-short format specified with --hpx:print-counter-format
    without any header


While the options --hpx:list-counters and --hpx:list-counter-infos give a short listing of all available counters, the full documentation for those can be found in the section Existing Performance Counters.

A Simple Example

All of the command line options mentioned above can be tested using, for instance, the hello_world example.

Listing all available counters (hello_world --hpx:list-counters) yields:

List of available counter instances
(replace '*' below with the appropriate sequence number)
-------------------------------------------------------------------------
/agas/count/allocate
/agas/count/bind
/agas/count/bind_gid
/agas/count/bind_name
...
/threads{locality#*/allocator#*}/count/objects
/threads{locality#*/total}/count/stack-recycles
/threads{locality#*/total}/idle-rate
/threads{locality#*/worker-thread#*}/idle-rate

Providing more information about all available counters (hello_world --hpx:list-counter-infos) yields:

Information about available counter instances
(replace * below with the appropriate sequence number)
------------------------------------------------------------------------------
fullname: /agas/count/allocate
helptext: returns the number of invocations of the AGAS service 'allocate'
type:     counter_raw
version:  1.0.0
------------------------------------------------------------------------------

------------------------------------------------------------------------------
fullname: /agas/count/bind
helptext: returns the number of invocations of the AGAS service 'bind'
type:     counter_raw
version:  1.0.0
------------------------------------------------------------------------------

------------------------------------------------------------------------------
fullname: /agas/count/bind_gid
helptext: returns the number of invocations of the AGAS service 'bind_gid'
type:     counter_raw
version:  1.0.0
------------------------------------------------------------------------------

...

This command lists not only the counter names but also a short description of the data exposed by each counter.

[Note]Note

The list of available counters may differ depending on the concrete execution environment (hardware or software) of your application.

Requesting the counter data for one or more performance counters can be achieved by invoking hello_world with a list of counter names:

hello_world \
    --hpx:print-counter=/threads{locality#0/total}/count/cumulative \
    --hpx:print-counter=/agas{locality#0/total}/count/bind

which yields for instance:

hello world from OS-thread 0 on locality 0
/threads{locality#0/total}/count/cumulative,1,0.212527,[s],33
/agas{locality#0/total}/count/bind,1,0.212790,[s],11

The first line is the normal output generated by hello_world and has no relation to the counter data listed. The last two lines contain the counter data as gathered at application shutdown. These lines have 6 fields: the counter name, the sequence number of the counter invocation, the time stamp at which this information has been sampled, the unit of measure for the time stamp, the actual counter value, and an optional unit of measure for the counter value.

Requesting to query the counter data repeatedly after a constant time interval with this command line

hello_world \
    --hpx:print-counter=/threads{locality#0/total}/count/cumulative \
    --hpx:print-counter=/agas{locality#0/total}/count/bind \
    --hpx:print-counter-interval=20

yields for instance (leaving off the actual console output of the hello_world example for brevity):

/threads{locality#0/total}/count/cumulative,1,0.002409,[s],22
/agas{locality#0/total}/count/bind,1,0.002542,[s],9
/threads{locality#0/total}/count/cumulative,2,0.023002,[s],41
/agas{locality#0/total}/count/bind,2,0.023557,[s],10
/threads{locality#0/total}/count/cumulative,3,0.037514,[s],46
/agas{locality#0/total}/count/bind,3,0.038679,[s],10

The command line option --hpx:print-counter-destination=<file> will redirect all counter data gathered to the specified file name, which avoids cluttering the console output of your application.

The command line option --hpx:print-counter supports using a limited set of wildcards for a (very limited) set of use cases. In particular, all occurrences of #* as in locality#* and in worker-thread#* will be automatically expanded to the proper set of performance counter names representing the actual environment for the executed program. For instance, if your program is utilizing 4 worker threads for the execution of HPX threads (see command line option --hpx:threads) the following command line

hello_world \
    --hpx:threads=4 \
    --hpx:print-counter=/threads{locality#0/worker-thread#*}/count/cumulative

will print the value of the performance counters monitoring each of the worker threads:

hello world from OS-thread 1 on locality 0
hello world from OS-thread 0 on locality 0
hello world from OS-thread 3 on locality 0
hello world from OS-thread 2 on locality 0
/threads{locality#0/worker-thread#0}/count/cumulative,1,0.0025214[s],27
/threads{locality#0/worker-thread#1}/count/cumulative,1,0.0025453[s],33
/threads{locality#0/worker-thread#2}/count/cumulative,1,0.0025683[s],29
/threads{locality#0/worker-thread#3}/count/cumulative,1,0.0025904[s],33

The command line option --hpx:print-counter-format takes the values csv and csv-short to generate CSV formatted counter values with a header.

With format as csv:

hello_world \
    --hpx:threads=2 \
    --hpx:print-counter-format csv \
    --hpx:print-counter /threads{locality#*/total}/count/cumulative \
    --hpx:print-counter /threads{locality#*/total}/count/cumulative-phases

will print the values of the performance counters in CSV format with the full counter names as header:

hello world from OS-thread 1 on locality 0
hello world from OS-thread 0 on locality 0
/threads{locality#*/total}/count/cumulative,/threads{locality#*/total}/count/cumulative-phases
39,93

With format csv-short:

hello_world \
    --hpx:threads 2 \
    --hpx:print-counter-format csv-short \
    --hpx:print-counter cumulative,/threads{locality#*/total}/count/cumulative \
    --hpx:print-counter phases,/threads{locality#*/total}/count/cumulative-phases

will print the values of the performance counters in CSV format with the short counter names as header:

hello world from OS-thread 1 on locality 0
hello world from OS-thread 0 on locality 0
cumulative,phases
39,93

With format csv or csv-short, when used together with --hpx:print-counter-interval:

hello_world \
    --hpx:threads 2 \
    --hpx:print-counter-format csv-short \
    --hpx:print-counter cumulative,/threads{locality#*/total}/count/cumulative \
    --hpx:print-counter phases,/threads{locality#*/total}/count/cumulative-phases \
    --hpx:print-counter-interval 5

will print the header only once, repeating the performance counter value(s) after each interval:

cumulative,phases
25,42
hello world from OS-thread 1 on locality 0
hello world from OS-thread 0 on locality 0
44,95

The command line option --hpx:no-csv-header can be used together with --hpx:print-counter-format to print performance counter values in CSV format without any header:

hello_world \
    --hpx:threads 2 \
    --hpx:print-counter-format csv-short \
    --hpx:print-counter cumulative,/threads{locality#*/total}/count/cumulative \
    --hpx:print-counter phases,/threads{locality#*/total}/count/cumulative-phases \
    --hpx:no-csv-header

will print

hello world from OS-thread 1 on locality 0
hello world from OS-thread 0 on locality 0
37,91

HPX provides an API allowing one to discover performance counters and to retrieve the current value of any existing performance counter from any application.

Discover Existing Performance Counters
Retrieve the Current Value of any Performance Counter

Performance counters are specialized HPX components. In order to retrieve a counter value, the performance counter needs to be instantiated. HPX exposes a client component object for this purpose:

hpx::performance_counters::performance_counter counter(std::string const& name);

Instantiating an instance of this type will create the performance counter identified by the given name. Only the first invocation for any given counter name will create a new instance of that counter; all following invocations for the same counter name will reference the initially created instance. This ensures that at any point in time there is never more than one active instance of any of the existing performance counters.

In order to access the counter value (or to invoke any of the other functionality related to a performance counter, like start, stop, or reset), member functions of the created client component instance should be called:

// print the current number of threads created on locality 0
hpx::performance_counters::performance_counter count(
    "/threads{locality#0/total}/count/cumulative");
hpx::cout << count.get_value<int>() << hpx::endl;

For more information about the client component type see hpx::performance_counters::performance_counter.
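The start, stop, and reset operations mentioned above are exposed as member functions of the same client component. The following is a sketch only: it mirrors the snippet above, and whether these member functions execute synchronously or return futures may depend on the HPX version in use.

```cpp
#include <hpx/include/performance_counters.hpp>
#include <hpx/include/iostreams.hpp>

void sample_thread_count()
{
    hpx::performance_counters::performance_counter count(
        "/threads{locality#0/total}/count/cumulative");

    count.start();      // start collecting counter data

    // ... execute the code to be measured ...

    hpx::cout << count.get_value<int>() << hpx::endl;

    count.reset();      // reset the counter value
    count.stop();       // stop collecting counter data
}
```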

HPX offers several ways by which you may provide your own data as a performance counter. This has the benefit of exposing additional, possibly application specific information using the existing Performance Counter framework, unifying the process of gathering data about your application.

An application that wants to provide counter data can implement a Performance Counter to provide the data. When a consumer queries performance data, the HPX runtime system calls the provider to collect the data. The runtime system uses an internal registry to determine which provider to call.

Generally, there are two ways of exposing your own Performance Counter data: a simple, function-based way, and a more complex but more powerful way of implementing a full Performance Counter. Both alternatives are described in the following sections.

The simplest way to expose arbitrary numeric data is to write a function which will then be called whenever a consumer queries this counter. Currently, this type of Performance Counter can only be used to expose integer values. The expected signature of this function is:

boost::int64_t some_performance_data(bool reset);

The argument bool reset (which is supplied by the runtime system when the function is invoked) specifies whether the counter value should be reset after evaluating the current value (if applicable).

For instance, here is such a function returning how often it was invoked:

// The atomic variable 'counter' ensures the thread safety of the counter.
boost::atomic<boost::int64_t> counter(0);

boost::int64_t some_performance_data(bool reset)
{
    boost::int64_t result = ++counter;
    if (reset)
        counter = 0;
    return result;
}

This example function exposes a linearly increasing value as our performance data. The value is incremented on each invocation, i.e. each time a consumer requests the counter data of this Performance Counter.

The next step in exposing this counter to the runtime system is to register the function as a new raw counter type using the HPX API function hpx::performance_counters::install_counter_type. A counter type represents certain common characteristics of counters, like their counter type name and any associated description information. The following snippet shows an example of how to register the function some_performance_data shown above for a counter type named "/test/data". This registration has to be executed before any consumer instantiates and queries an instance of this counter type.

#include <hpx/include/performance_counters.hpp>

void register_counter_type()
{
    // Call the HPX API function to register the counter type.
    hpx::performance_counters::install_counter_type(
        "/test/data",                                   // counter type name
        &some_performance_data,                         // function providing counter data
        "returns a linearly increasing counter value",  // description text (optional)
        ""                                              // unit of measure (optional)
    );
}

Now it is possible to instantiate a new counter instance based on the naming scheme "/test{locality#*/total}/data", where '*' is a zero-based integer index identifying the locality for which the counter instance should be accessed. The function install_counter_type enables the instantiation of exactly one counter instance for each locality. Repeated requests to instantiate such a counter will return the same instance, i.e. the instance created for the first request.
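Once registered, such a counter can be consumed like any other counter, either from the command line (e.g. --hpx:print-counter=/test{locality#0/total}/data) or programmatically via the client component described in the section Consuming Performance Counters. A sketch, assuming the registration shown above has been performed:

```cpp
#include <hpx/include/performance_counters.hpp>
#include <hpx/include/iostreams.hpp>

void query_test_data()
{
    // Creates the counter on first use; subsequent instantiations for the
    // same name reference the existing instance (see the discussion above).
    hpx::performance_counters::performance_counter c(
        "/test{locality#0/total}/data");
    hpx::cout << c.get_value<boost::int64_t>() << hpx::endl;
}
```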

If this counter needs to be accessed using the standard HPX command line options, the registration has to be performed during application startup, before hpx_main is executed. The best way to achieve this is to register an HPX startup function using the API function hpx::register_startup_function before calling hpx::init to initialize the runtime system:

int main(int argc, char* argv[])
{
    // By registering the counter type we make it available to any consumer
    // who creates and queries an instance of the type "/test/data".
    //
    // This registration should be performed during startup. The
    // function 'register_counter_type' should be executed as an HPX thread right
    // before hpx_main is executed.
    hpx::register_startup_function(&register_counter_type);

    // Initialize and run HPX.
    return hpx::init(argc, argv);
}

Please see the code in simplest_performance_counter.cpp for a full example demonstrating this functionality.

Sometimes, the simple way of exposing a single value as a Performance Counter is not sufficient. For that reason, HPX provides a means of implementing full Performance Counters which support:

  • Retrieving the descriptive information about the Performance Counter
  • Retrieving the current counter value
  • Resetting the Performance Counter (value)
  • Starting the Performance Counter
  • Stopping the Performance Counter
  • Setting the (initial) value of the Performance Counter

Every full Performance Counter will implement a predefined interface:

[performance_counter_interface]

In order to implement a full Performance Counter you have to create an HPX component exposing this interface. To simplify this task, HPX provides a ready-made base class which handles all the boilerplate of creating a component for you. The remainder of this section will explain the process of creating a full Performance Counter based on the Sine example which you can find in the directory examples/performance_counters/sine/.

The base class is defined in the header file hpx/performance_counters/base_performance_counter.hpp as:

[base_performance_counter_class]

The single template parameter is expected to receive the type of the derived class implementing the Performance Counter. In the Sine example this looks like:

class sine_counter
  : public hpx::performance_counters::base_performance_counter<sine_counter>

i.e. the type sine_counter is derived from the base class, passing the type as a template argument (please see sine.hpp for the full source code of the counter definition). For more information about this technique (called the Curiously Recurring Template Pattern, CRTP), please see for instance the corresponding Wikipedia article. This base class itself is derived from the performance_counter interface described above.

Additionally, a full Performance Counter implementation not only exposes the actual value but also provides information about

  • The point in time a particular value was retrieved
  • A (sequential) invocation count
  • The actual counter value
  • An optional scaling coefficient
  • Information about the counter status
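In the Sine example this additional information is produced by overriding the base class hook which computes the counter value. The following is a sketch only: the member variables current_value_ and invocation_count_ are hypothetical, and the exact signature of the overridden function may differ between HPX versions (please see sine.cpp for the actual implementation):

```cpp
hpx::performance_counters::counter_value
sine_counter::get_counter_value(bool reset)
{
    hpx::performance_counters::counter_value value;

    value.value_ = current_value_;          // the actual counter value
    value.count_ = ++invocation_count_;     // sequential invocation count
    value.time_ = static_cast<boost::int64_t>(
        hpx::get_system_uptime());          // time the value was retrieved
    value.scaling_ = 1;                     // optional scaling coefficient
    value.scale_inverse_ = false;           // multiply (false) or divide (true)
    value.status_ = hpx::performance_counters::status_new_data;

    return value;
}
```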

The HPX runtime system exposes a wide variety of predefined Performance Counters. These counters expose critical information about different modules of the runtime system. They can help determine system bottlenecks and fine-tune system and application performance.

Table 27. AGAS Performance Counters

Counter Type

Counter Instance Formatting

Parameters

Description

/agas/count/<agas_service>

where:
<agas_service> is one of the following:
primary namespace services: route, bind_gid, resolve_gid, unbind_gid, increment_credit, decrement_credit, allocate, begin_migration, end_migration

component namespace services: bind_prefix, bind_name, resolve_id, unbind_name, iterate_types, get_component_typename, num_localities_type

locality namespace services: free, localities, num_localities, num_threads, resolve_locality, resolved_localities

symbol namespace services: bind, resolve, unbind, iterate_names, on_symbol_namespace_event

<agas_instance>/total

where:
<agas_instance> is the name of the AGAS service to query. Currently, this value will be locality#0, where 0 is the root locality (the id of the locality hosting the AGAS service).

The value for * can be any locality id for the following <agas_service>: route, bind_gid, resolve_gid, unbind_gid, increment_credit, decrement_credit,bind, resolve, unbind, and iterate_names (only the primary and symbol AGAS service components live on all localities, whereas all other AGAS services are available on locality#0 only).

None

Returns the total number of invocations of the specified AGAS service since its creation.

/agas/<agas_service_category>/count

where:
<agas_service_category> is one of the following: primary, locality, component, or symbol.

<agas_instance>/total

where:
<agas_instance> is the name of the AGAS service to query. Currently, this value will be locality#0, where 0 is the root locality (the id of the locality hosting the AGAS service). Except for <agas_service_category> primary or symbol for which the value for * can be any locality id (only the primary and symbol AGAS service components live on all localities, whereas all other AGAS services are available on locality#0 only).

None

Returns the overall total number of invocations of all AGAS services provided by the given AGAS service category since its creation.

/agas/time/<agas_service>

where:
<agas_service> is one of the following:
primary namespace services: route, bind_gid, resolve_gid, unbind_gid, increment_credit, decrement_credit, allocate, begin_migration, end_migration

component namespace services: bind_prefix, bind_name, resolve_id, unbind_name, iterate_types, get_component_typename, num_localities_type

locality namespace services: free, localities, num_localities, num_threads, resolve_locality, resolved_localities

symbol namespace services: bind, resolve, unbind, iterate_names, on_symbol_namespace_event

<agas_instance>/total

where:
<agas_instance> is the name of the AGAS service to query. Currently, this value will be locality#0, where 0 is the root locality (the id of the locality hosting the AGAS service).

The value for * can be any locality id for the following <agas_service>: route, bind_gid, resolve_gid, unbind_gid, increment_credit, decrement_credit, bind, resolve, unbind, and iterate_names (only the primary and symbol AGAS service components live on all localities, whereas all other AGAS services are available on locality#0 only).

None

Returns the overall execution time of the specified AGAS service since its creation (in nanoseconds).

/agas/<agas_service_category>/time

where:
<agas_service_category> is one of the following: primary, locality, component, or symbol.

<agas_instance>/total

where:
<agas_instance> is the name of the AGAS service to query. Currently, this value will be locality#0, where 0 is the root locality (the id of the locality hosting the AGAS service). For the <agas_service_category> values primary and symbol, however, the value for * can be any locality id (only the primary and symbol AGAS service components live on all localities, whereas all other AGAS services are available on locality#0 only).

None

Returns the overall execution time of all AGAS services provided by the given AGAS service category since its creation (in nanoseconds).

/agas/count/entries

locality#*/total

where:
* is the locality id of the locality whose AGAS cache should be queried. The locality id is a (zero based) number identifying the locality.

None

Returns the number of cache entries resident in the AGAS cache of the specified locality.

/agas/count/<cache_statistics>

where:
<cache_statistics> is one of the following: cache/evictions, cache/hits, cache/inserts, cache/misses

locality#*/total

where:
* is the locality id of the locality whose AGAS cache should be queried. The locality id is a (zero based) number identifying the locality.

None

Returns the number of cache events (evictions, hits, inserts, and misses) in the AGAS cache of the specified locality (see <cache_statistics>).

/agas/count/<full_cache_statistics>

where:
<full_cache_statistics> is one of the following: cache/get_entry, cache/insert_entry, cache/update_entry, cache/erase_entry

locality#*/total

where:
* is the locality id of the locality whose AGAS cache should be queried. The locality id is a (zero based) number identifying the locality.

None

Returns the number of invocations of the specified cache API function of the AGAS cache.

/agas/time/<full_cache_statistics>

where:
<full_cache_statistics> is one of the following: cache/get_entry, cache/insert_entry, cache/update_entry, cache/erase_entry

locality#*/total

where:
* is the locality id of the locality whose AGAS cache should be queried. The locality id is a (zero based) number identifying the locality.

None

Returns the overall time spent executing the specified API function of the AGAS cache.


Table 28. Parcel Layer Performance Counters

Counter Type

Counter Instance Formatting

Parameters

Description

/data/count/<connection_type>/<operation>

where:
<operation> is one of the following: sent, received
<connection_type> is one of the following: tcp, ipc, ibverbs, mpi

locality#*/total

where:
* is the locality id of the locality the overall number of transmitted bytes should be queried for. The locality id is a (zero based) number identifying the locality.

None

Returns the overall number of raw (uncompressed) bytes sent or received (see <operation>, e.g. sent or received) for the specified <connection_type>.

The performance counters for the connection type ipc are available only if the compile time constant HPX_HAVE_PARCELPORT_IPC was defined while compiling the HPX core library (which is not defined by default, the corresponding cmake configuration constant is HPX_WITH_PARCELPORT_IPC).

The performance counters for the connection type ibverbs are available only if the compile time constant HPX_HAVE_PARCELPORT_IBVERBS was defined while compiling the HPX core library (which is not defined by default, the corresponding cmake configuration constant is HPX_WITH_PARCELPORT_IBVERBS).

The performance counters for the connection type mpi are available only if the compile time constant HPX_HAVE_PARCELPORT_MPI was defined while compiling the HPX core library (which is not defined by default, the corresponding cmake configuration constant is HPX_WITH_PARCELPORT_MPI).

Please see CMake Variables used to configure HPX for more details.

/data/time/<connection_type>/<operation>

where:
<operation> is one of the following: sent, received
<connection_type> is one of the following: tcp, ipc, ibverbs, mpi

locality#*/total

where:
* is the locality id of the locality the total transmission time should be queried for. The locality id is a (zero based) number identifying the locality.

None

Returns the total time (in nanoseconds) between the start of each asynchronous transmission operation and the end of the corresponding operation for the specified <connection_type> by the given locality (see <operation>, e.g. sent or received).

The performance counters for the connection type ipc are available only if the compile time constant HPX_HAVE_PARCELPORT_IPC was defined while compiling the HPX core library (which is not defined by default, the corresponding cmake configuration constant is HPX_WITH_PARCELPORT_IPC).

The performance counters for the connection type ibverbs are available only if the compile time constant HPX_HAVE_PARCELPORT_IBVERBS was defined while compiling the HPX core library (which is not defined by default, the corresponding cmake configuration constant is HPX_WITH_PARCELPORT_IBVERBS).

The performance counters for the connection type mpi are available only if the compile time constant HPX_HAVE_PARCELPORT_MPI was defined while compiling the HPX core library (which is not defined by default, the corresponding cmake configuration constant is HPX_WITH_PARCELPORT_MPI).

Please see CMake Variables used to configure HPX for more details.

/serialize/count/<connection_type>/<operation>

where:
<operation> is one of the following: sent, received
<connection_type> is one of the following: tcp, ipc, ibverbs, mpi

locality#*/total

where:
* is the locality id of the locality the overall number of transmitted bytes should be queried for. The locality id is a (zero based) number identifying the locality.

None

Returns the overall number of bytes transferred (see <operation>, e.g. sent or received, possibly compressed) for the specified <connection_type> by the given locality.

The performance counters for the connection type ipc are available only if the compile time constant HPX_HAVE_PARCELPORT_IPC was defined while compiling the HPX core library (which is not defined by default, the corresponding cmake configuration constant is HPX_WITH_PARCELPORT_IPC).

The performance counters for the connection type ibverbs are available only if the compile time constant HPX_HAVE_PARCELPORT_IBVERBS was defined while compiling the HPX core library (which is not defined by default, the corresponding cmake configuration constant is HPX_WITH_PARCELPORT_IBVERBS).

The performance counters for the connection type mpi are available only if the compile time constant HPX_HAVE_PARCELPORT_MPI was defined while compiling the HPX core library (which is not defined by default, the corresponding cmake configuration constant is HPX_WITH_PARCELPORT_MPI).

Please see CMake Variables used to configure HPX for more details.

/serialize/time/<connection_type>/<operation>

where:
<operation> is one of the following: sent, received
<connection_type> is one of the following: tcp, ipc, ibverbs, mpi

locality#*/total

where:
* is the locality id of the locality the serialization time should be queried for. The locality id is a (zero based) number identifying the locality.

None

Returns the overall time spent performing outgoing data serialization for the specified <connection_type> on the given locality (see <operation>, e.g. sent or received).

The performance counters for the connection type ipc are available only if the compile time constant HPX_HAVE_PARCELPORT_IPC was defined while compiling the HPX core library (which is not defined by default, the corresponding cmake configuration constant is HPX_WITH_PARCELPORT_IPC).

The performance counters for the connection type ibverbs are available only if the compile time constant HPX_HAVE_PARCELPORT_IBVERBS was defined while compiling the HPX core library (which is not defined by default, the corresponding cmake configuration constant is HPX_WITH_PARCELPORT_IBVERBS).

The performance counters for the connection type mpi are available only if the compile time constant HPX_HAVE_PARCELPORT_MPI was defined while compiling the HPX core library (which is not defined by default, the corresponding cmake configuration constant is HPX_WITH_PARCELPORT_MPI).

Please see CMake Variables used to configure HPX for more details.

/security/time/<connection_type>/<operation>

where:
<operation> is one of the following: sent, received
<connection_type> is one of the following: tcp, ipc, ibverbs, mpi

locality#*/total

where:
* is the locality id of the locality the time spent on security related operations should be queried for. The locality id is a (zero based) number identifying the locality.

None

Returns the overall time spent performing outgoing security operations for the specified <connection_type> on the given locality (see <operation>, e.g. sent or received).

The performance counters for the connection type ipc are available only if the compile time constant HPX_HAVE_PARCELPORT_IPC was defined while compiling the HPX core library (which is not defined by default, the corresponding cmake configuration constant is HPX_WITH_PARCELPORT_IPC).

The performance counters for the connection type ibverbs are available only if the compile time constant HPX_HAVE_PARCELPORT_IBVERBS was defined while compiling the HPX core library (which is not defined by default, the corresponding cmake configuration constant is HPX_WITH_PARCELPORT_IBVERBS).

The performance counters for the connection type mpi are available only if the compile time constant HPX_HAVE_PARCELPORT_MPI was defined while compiling the HPX core library (which is not defined by default, the corresponding cmake configuration constant is HPX_WITH_PARCELPORT_MPI).

These performance counters are available only if the compile time constant HPX_HAVE_SECURITY was defined while compiling the HPX core library (which is not defined by default, the corresponding cmake configuration constant is HPX_WITH_SECURITY).

Please see CMake Variables used to configure HPX for more details.

/parcels/count/routed

locality#*/total

where:
* is the locality id of the locality the number of routed parcels should be queried for. The locality id is a (zero based) number identifying the locality.

None

Returns the overall number of routed (outbound) parcels transferred by the given locality.

Routed parcels are those which cannot be delivered directly to their destination because the local AGAS is unable to resolve the destination address. In this case the parcel is sent to the AGAS service component which is responsible for creating the destination GID (and for resolving the destination address). This AGAS service component then delivers the parcel to its final target.

/parcels/count/<connection_type>/<operation>

where:
<operation> is one of the following: sent, received
<connection_type> is one of the following: tcp, ipc, ibverbs, mpi

locality#*/total

where:
* is the locality id of the locality the number of parcels should be queried for. The locality id is a (zero based) number identifying the locality.

None

Returns the overall number of parcels transferred using the specified <connection_type> by the given locality (see <operation>, e.g. sent or received).

The performance counters for the connection type ipc are available only if the compile time constant HPX_HAVE_PARCELPORT_IPC was defined while compiling the HPX core library (which is not defined by default, the corresponding cmake configuration constant is HPX_WITH_PARCELPORT_IPC).

The performance counters for the connection type ibverbs are available only if the compile time constant HPX_HAVE_PARCELPORT_IBVERBS was defined while compiling the HPX core library (which is not defined by default, the corresponding cmake configuration constant is HPX_WITH_PARCELPORT_IBVERBS).

The performance counters for the connection type mpi are available only if the compile time constant HPX_HAVE_PARCELPORT_MPI was defined while compiling the HPX core library (which is not defined by default, the corresponding cmake configuration constant is HPX_WITH_PARCELPORT_MPI).

Please see CMake Variables used to configure HPX for more details.

/messages/count/<connection_type>/<operation>

where:
<operation> is one of the following: sent, received
<connection_type> is one of the following: tcp, ipc, ibverbs, mpi

locality#*/total

where:
* is the locality id of the locality the number of messages should be queried for. The locality id is a (zero based) number identifying the locality.

None

Returns the overall number of messages [a] transferred using the specified <connection_type> by the given locality (see <operation>, e.g. sent or received).

The performance counters for the connection type ipc are available only if the compile time constant HPX_HAVE_PARCELPORT_IPC was defined while compiling the HPX core library (which is not defined by default, the corresponding cmake configuration constant is HPX_WITH_PARCELPORT_IPC).

The performance counters for the connection type ibverbs are available only if the compile time constant HPX_HAVE_PARCELPORT_IBVERBS was defined while compiling the HPX core library (which is not defined by default, the corresponding cmake configuration constant is HPX_WITH_PARCELPORT_IBVERBS).

The performance counters for the connection type mpi are available only if the compile time constant HPX_HAVE_PARCELPORT_MPI was defined while compiling the HPX core library (which is not defined by default, the corresponding cmake configuration constant is HPX_WITH_PARCELPORT_MPI).

Please see CMake Variables used to configure HPX for more details.

/parcelport/count/<connection_type>/<cache_statistics>

where:
<cache_statistics> is one of the following: cache/insertions, cache/evictions, cache/hits, cache/misses, cache/reclaims
<connection_type> is one of the following: tcp, ipc, ibverbs, mpi

locality#*/total

where:
* is the locality id of the locality the number of cache events should be queried for. The locality id is a (zero based) number identifying the locality.

None

Returns the overall number of cache events (evictions, hits, inserts, misses, and reclaims) for the connection cache of the given connection type on the given locality (see <cache_statistics>, e.g. cache/insertions, cache/evictions, cache/hits, cache/misses, or cache/reclaims).

The performance counters for the connection type ipc are available only if the compile time constant HPX_HAVE_PARCELPORT_IPC was defined while compiling the HPX core library (which is not defined by default, the corresponding cmake configuration constant is HPX_WITH_PARCELPORT_IPC).

The performance counters for the connection type ibverbs are available only if the compile time constant HPX_HAVE_PARCELPORT_IBVERBS was defined while compiling the HPX core library (which is not defined by default, the corresponding cmake configuration constant is HPX_WITH_PARCELPORT_IBVERBS).

The performance counters for the connection type mpi are available only if the compile time constant HPX_HAVE_PARCELPORT_MPI was defined while compiling the HPX core library (which is not defined by default, the corresponding cmake configuration constant is HPX_WITH_PARCELPORT_MPI).

Please see CMake Variables used to configure HPX for more details.

/parcelqueue/length/<operation>

where:
<operation> is one of the following: send, receive

locality#*/total

where:
* is the locality id of the locality the parcel queue should be queried for. The locality id is a (zero based) number identifying the locality.

None

Returns the current number of parcels stored in the parcel queue (see <operation> for which queue to query, e.g. send or receive).

[a] A message can potentially consist of more than one parcel.
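The parcel counters above can be sampled at runtime from the command line using the standard --hpx:print-counter options (described under HPX Command Line Options). For instance (my_hpx_app is a placeholder application name):

```shell
# Print the number of parcels sent over TCP by locality 0 once at shutdown.
./my_hpx_app --hpx:print-counter=/parcels{locality#0/total}/count/tcp/sent

# Sample the same counter periodically while the application runs.
./my_hpx_app --hpx:print-counter=/parcels{locality#0/total}/count/tcp/sent \
             --hpx:print-counter-interval=100
```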


Table 29. Thread Manager Performance Counters

Counter Type

Counter Instance Formatting

Parameters

Description

/threads/count/cumulative

locality#*/total or
locality#*/worker-thread#*

where:
locality#* defines the locality for which the overall number of retired HPX-threads should be queried. The locality id (given by *) is a (zero based) number identifying the locality.

worker-thread#* defines the worker thread for which the overall number of retired HPX-threads should be queried. The worker thread number (given by the *) is a (zero based) number identifying the worker thread. The number of available worker threads is usually specified on the command line for the application using the option --hpx:threads.

None

Returns the overall number of executed (retired) HPX-threads on the given locality since application start. If the instance name is total the counter returns the accumulated number of retired HPX-threads for all worker threads (cores) on that locality. If the instance name is worker-thread#* the counter will return the overall number of retired HPX-threads for all worker threads separately. This counter is available only if the configuration time constant HPX_WITH_THREAD_CUMULATIVE_COUNTS is set to ON (default: ON).

/threads/time/average

locality#*/total or
locality#*/worker-thread#*

where:
locality#* defines the locality for which the average time spent executing one HPX-thread should be queried. The locality id (given by *) is a (zero based) number identifying the locality.

worker-thread#* defines the worker thread for which the average time spent executing one HPX-thread should be queried. The worker thread number (given by the *) is a (zero based) number identifying the worker thread. The number of available worker threads is usually specified on the command line for the application using the option --hpx:threads.

None

Returns the average time spent executing one HPX-thread on the given locality since application start. If the instance name is total the counter returns the average time spent executing one HPX-thread for all worker threads (cores) on that locality. If the instance name is worker-thread#* the counter will return the average time spent executing one HPX-thread for all worker threads separately. This counter is available only if the configuration time constants HPX_WITH_THREAD_CUMULATIVE_COUNTS (default: ON) and HPX_WITH_THREAD_IDLE_RATES are set to ON (default: OFF).

/threads/time/average-overhead

locality#*/total or
locality#*/worker-thread#*

where:
locality#* defines the locality for which the average overhead of executing one HPX-thread should be queried. The locality id (given by *) is a (zero based) number identifying the locality.

worker-thread#* defines the worker thread for which the average overhead of executing one HPX-thread should be queried. The worker thread number (given by the *) is a (zero based) number identifying the worker thread. The number of available worker threads is usually specified on the command line for the application using the option --hpx:threads.

None

Returns the average time spent on overhead while executing one HPX-thread on the given locality since application start. If the instance name is total the counter returns the average time spent on overhead while executing one HPX-thread for all worker threads (cores) on that locality. If the instance name is worker-thread#* the counter will return the average time spent on overhead executing one HPX-thread for all worker threads separately. This counter is available only if the configuration time constants HPX_WITH_THREAD_CUMULATIVE_COUNTS (default: ON) and HPX_WITH_THREAD_IDLE_RATES are set to ON (default: OFF).

/threads/count/cumulative-phases

locality#*/total or
locality#*/worker-thread#*

where:
locality#* defines the locality for which the overall number of executed HPX-thread phases (invocations) should be queried. The locality id (given by *) is a (zero based) number identifying the locality.

worker-thread#* defines the worker thread for which the overall number of executed HPX-thread phases (invocations) should be queried. The worker thread number (given by the *) is a (zero based) number identifying the worker thread. The number of available worker threads is usually specified on the command line for the application using the option --hpx:threads.

None

Returns the overall number of executed HPX-thread phases (invocations) on the given locality since application start. If the instance name is total the counter returns the accumulated number of executed HPX-thread phases (invocations) for all worker threads (cores) on that locality. If the instance name is worker-thread#* the counter will return the overall number of executed HPX-thread phases for all worker threads separately. This counter is available only if the configuration time constant HPX_WITH_THREAD_CUMULATIVE_COUNTS is set to ON (default: ON).

/threads/time/average-phase

locality#*/total or
locality#*/worker-thread#*

where:
locality#* defines the locality for which the average time spent executing one HPX-thread phase (invocation) should be queried. The locality id (given by *) is a (zero based) number identifying the locality.

worker-thread#* defines the worker thread for which the average time spent executing one HPX-thread phase (invocation) should be queried. The worker thread number (given by the *) is a (zero based) number identifying the worker thread. The number of available worker threads is usually specified on the command line for the application using the option --hpx:threads.

None

Returns the average time spent executing one HPX-thread phase (invocation) on the given locality since application start. If the instance name is total the counter returns the average time spent executing one HPX-thread phase (invocation) for all worker threads (cores) on that locality. If the instance name is worker-thread#* the counter will return the average time spent executing one HPX-thread phase for all worker threads separately. This counter is available only if the configuration time constants HPX_WITH_THREAD_CUMULATIVE_COUNTS (default: ON) and HPX_WITH_THREAD_IDLE_RATES are set to ON (default: OFF).

/threads/time/average-phase-overhead

locality#*/total or
locality#*/worker-thread#*

where:
locality#* defines the locality for which the average overhead of executing one HPX-thread phase (invocation) should be queried. The locality id (given by *) is a (zero based) number identifying the locality.

worker-thread#* defines the worker thread for which the average overhead of executing one HPX-thread phase (invocation) should be queried. The worker thread number (given by the *) is a (zero based) number identifying the worker thread. The number of available worker threads is usually specified on the command line for the application using the option --hpx:threads.

None

Returns the average time spent on overhead executing one HPX-thread phase (invocation) on the given locality since application start. If the instance name is total the counter returns the average time spent on overhead while executing one HPX-thread phase (invocation) for all worker threads (cores) on that locality. If the instance name is worker-thread#* the counter will return the average time spent on overhead executing one HPX-thread phase for all worker threads separately. This counter is available only if the configuration time constants HPX_WITH_THREAD_CUMULATIVE_COUNTS (default: ON) and HPX_WITH_THREAD_IDLE_RATES are set to ON (default: OFF).

/threads/time/overall

locality#*/total or
locality#*/worker-thread#*

where:
locality#* defines the locality for which the overall time spent running the scheduler should be queried. The locality id (given by *) is a (zero based) number identifying the locality.

worker-thread#* defines the worker thread for which the overall time spent running the scheduler should be queried. The worker thread number (given by the *) is a (zero based) number identifying the worker thread. The number of available worker threads is usually specified on the command line for the application using the option --hpx:threads.

None

Returns the overall time spent running the scheduler on the given locality since application start. If the instance name is total the counter returns the overall time spent running the scheduler for all worker threads (cores) on that locality. If the instance name is worker-thread#* the counter will return the overall time spent running the scheduler for all worker threads separately. This counter is available only if the configuration time constant HPX_WITH_THREAD_IDLE_RATES is set to ON (default: OFF).

/threads/time/cumulative

locality#*/total or
locality#*/worker-thread#*

where:
locality#* defines the locality for which the overall time spent executing all HPX-threads should be queried. The locality id (given by *) is a (zero based) number identifying the locality.

worker-thread#* defines the worker thread for which the overall time spent executing all HPX-threads should be queried. The worker thread number (given by the *) is a (zero based) number identifying the worker thread. The number of available worker threads is usually specified on the command line for the application using the option --hpx:threads.

None

Returns the overall time spent executing all HPX-threads on the given locality since application start. If the instance name is total the counter returns the overall time spent executing all HPX-threads for all worker threads (cores) on that locality. If the instance name is worker-thread#* the counter will return the overall time spent executing all HPX-threads for all worker threads separately. This counter is available only if the configuration time constants HPX_WITH_THREAD_CUMULATIVE_COUNTS (default: ON) and HPX_WITH_THREAD_IDLE_RATES are set to ON (default: OFF).

/threads/time/cumulative-overheads

locality#*/total or
locality#*/worker-thread#*

where:
locality#* defines the locality for which the overall overhead time incurred by executing all HPX-threads should be queried. The locality id (given by *) is a (zero based) number identifying the locality.

worker-thread#* defines the worker thread for which the overall overhead time incurred by executing all HPX-threads should be queried. The worker thread number (given by the *) is a (zero based) number identifying the worker thread. The number of available worker threads is usually specified on the command line for the application using the option --hpx:threads.

None

Returns the overall overhead time incurred executing all HPX-threads on the given locality since application start. If the instance name is total the counter returns the overall overhead time incurred executing all HPX-threads for all worker threads (cores) on that locality. If the instance name is worker-thread#* the counter will return the overall overhead time incurred executing all HPX-threads for all worker threads separately. This counter is available only if the configuration time constants HPX_WITH_THREAD_CUMULATIVE_COUNTS (default: ON) and HPX_WITH_THREAD_IDLE_RATES are set to ON (default: OFF).

/threads/count/instantaneous/<thread-state>

where:
<thread-state> is one of the following: all, active, pending, suspended, terminated, staged

locality#*/total or
locality#*/worker-thread#*

where:
locality#* defines the locality for which the current number of threads with the given state should be queried. The locality id (given by *) is a (zero based) number identifying the locality.

worker-thread#* defines the worker thread for which the current number of threads with the given state should be queried. The worker thread number (given by the *) is a (zero based) number identifying the worker thread. The number of available worker threads is usually specified on the command line for the application using the option --hpx:threads.

The staged thread state refers to registered tasks before they are converted to thread objects.

None

Returns the current number of HPX-threads having the given thread state on the given locality. If the instance name is total the counter returns the current number of HPX-threads of the given state for all worker threads (cores) on that locality. If the instance name is worker-thread#* the counter will return the current number of HPX-threads in the given state for all worker threads separately.

/threads/wait-time/<thread-state>

where:
<thread-state> is one of the following: pending, staged

locality#*/total or
locality#*/worker-thread#*

where:
locality#* defines the locality for which the average wait time of HPX-threads (pending) or thread descriptions (staged) with the given state should be queried. The locality id (given by *) is a (zero based) number identifying the locality.

worker-thread#* defines the worker thread for which the average wait time for the given state should be queried. The worker thread number (given by the *) is a (zero based) number identifying the worker thread. The number of available worker threads is usually specified on the command line for the application using the option --hpx:threads.

The staged thread state refers to the wait time of registered tasks before they are converted into thread objects, while the pending thread state refers to the wait time of threads in any of the scheduling queues.

None

Returns the average wait time of HPX-threads (if the thread state is pending) or of task descriptions (if the thread state is staged) on the given locality since application start. If the instance name is total the counter returns the wait time of HPX-threads of the given state for all worker threads (cores) on that locality. If the instance name is worker-thread#* the counter will return the wait time of HPX-threads in the given state for all worker threads separately.

These counters are available only if the compile time constant HPX_WITH_THREAD_QUEUE_WAITTIME was defined while compiling the HPX core library (default: OFF).

/threads/idle-rate

locality#*/total or
locality#*/worker-thread#*

where:
locality#* defines the locality for which the average idle rate of all (or one) worker threads should be queried. The locality id (given by *) is a (zero based) number identifying the locality.

worker-thread#* defines the worker thread for which the average idle rate should be queried. The worker thread number (given by the *) is a (zero based) number identifying the worker thread. The number of available worker threads is usually specified on the command line for the application using the option --hpx:threads.

None

Returns the average idle rate for the given worker thread(s) on the given locality. The idle rate is defined as the ratio of the time spent on scheduling and management tasks to the overall time spent executing work since the application started. This counter is available only if the configuration time constant HPX_WITH_THREAD_IDLE_RATES is set to ON (default: OFF).

/threads/creation-idle-rate

locality#*/total or
locality#*/worker-thread#*

where:
locality#* defines the locality for which the average creation idle rate of all (or one) worker threads should be queried. The locality id (given by *) is a (zero based) number identifying the locality.

worker-thread#* defines the worker thread for which the average creation idle rate should be queried. The worker thread number (given by the *) is a (zero based) number identifying the worker thread. The number of available worker threads is usually specified on the command line for the application using the option --hpx:threads.

None

Returns the average idle rate for the given worker thread(s) on the given locality which is caused by creating new threads. The creation idle rate is defined as the ratio of the time spent on creating new threads and the overall time spent executing work since the application started. This counter is available only if the configuration time constants HPX_WITH_THREAD_IDLE_RATES (default: OFF) and HPX_WITH_THREAD_CREATION_AND_CLEANUP_RATES are set to ON.

/threads/cleanup-idle-rate

locality#*/total or
locality#*/worker-thread#*

where:
locality#* specifies the locality for which the average cleanup idle rate of all (or one) worker threads should be queried. The locality id (given by *) is a (zero based) number identifying the locality.

worker-thread#* specifies the worker thread for which the average cleanup idle rate should be queried. The worker thread number (given by *) is a (zero based) number identifying the worker thread. The number of available worker threads is usually specified on the command line for the application using the option --hpx:threads.

None

Returns the average idle rate for the given worker thread(s) on the given locality which is caused by cleaning up terminated threads. The cleanup idle rate is defined as the ratio of the time spent on cleaning up terminated thread objects and the overall time spent executing work since the application started. This counter is available only if the configuration time constants HPX_WITH_THREAD_IDLE_RATES (default: OFF) and HPX_WITH_THREAD_CREATION_AND_CLEANUP_RATES are set to ON.

/threadqueue/length

locality#*/total or
locality#*/worker-thread#*

where:
locality#* specifies the locality for which the current length of all thread queues in the scheduler for all (or one) worker threads should be queried. The locality id (given by *) is a (zero based) number identifying the locality.

worker-thread#* specifies the worker thread for which the current length of all thread queues in the scheduler should be queried. The worker thread number (given by *) is a (zero based) number identifying the worker thread. The number of available worker threads is usually specified on the command line for the application using the option --hpx:threads.

None

Returns the overall length of all queues for the given worker thread(s) on the given locality.

/threads/count/stack-unbinds

locality#*/total

where:
* is the locality id of the locality for which the unbind (madvise) operations should be queried. The locality id is a (zero based) number identifying the locality.

None

Returns the total number of HPX-thread unbind (madvise) operations performed for the referenced locality. Note that this counter is not available on Windows based platforms.

/threads/count/stack-recycles

locality#*/total

where:
* is the locality id of the locality for which the recycling operations should be queried. The locality id is a (zero based) number identifying the locality.

None

Returns the total number of HPX-thread recycling operations performed.

/threads/count/stolen-from-pending

locality#*/total

where:
* is the locality id of the locality for which the number of 'stolen' threads should be queried. The locality id is a (zero based) number identifying the locality.

None

Returns the total number of HPX-threads 'stolen' from the pending thread queue by a neighboring worker thread (these threads are executed by a different worker thread than they were initially scheduled on). This counter is available only if the configuration time constant HPX_WITH_THREAD_STEALING_COUNTS is set to ON (default: ON).

/threads/count/pending-misses

locality#*/total or
locality#*/worker-thread#*

where:
locality#* specifies the locality for which the number of pending queue misses of all (or one) worker threads should be queried. The locality id (given by *) is a (zero based) number identifying the locality.

worker-thread#* specifies the worker thread for which the number of pending queue misses should be queried. The worker thread number (given by *) is a (zero based) number identifying the worker thread. The number of available worker threads is usually specified on the command line for the application using the option --hpx:threads.

None

Returns the total number of times that the referenced worker-thread on the referenced locality failed to find pending HPX-threads in its associated queue. This counter is available only if the configuration time constant HPX_WITH_THREAD_STEALING_COUNTS is set to ON (default: ON).

/threads/count/pending-accesses

locality#*/total or
locality#*/worker-thread#*

where:
locality#* specifies the locality for which the number of pending queue accesses of all (or one) worker threads should be queried. The locality id (given by *) is a (zero based) number identifying the locality.

worker-thread#* specifies the worker thread for which the number of pending queue accesses should be queried. The worker thread number (given by *) is a (zero based) number identifying the worker thread. The number of available worker threads is usually specified on the command line for the application using the option --hpx:threads.

None

Returns the total number of times that the referenced worker-thread on the referenced locality looked for pending HPX-threads in its associated queue. This counter is available only if the configuration time constant HPX_WITH_THREAD_STEALING_COUNTS is set to ON (default: ON).

/threads/count/stolen-from-staged

locality#*/total or
locality#*/worker-thread#*

where:
locality#* specifies the locality for which the number of HPX-threads stolen from the staged queue of all (or one) worker threads should be queried. The locality id (given by *) is a (zero based) number identifying the locality.

worker-thread#* specifies the worker thread for which the number of HPX-threads stolen from the staged queue should be queried. The worker thread number (given by *) is a (zero based) number identifying the worker thread. The number of available worker threads is usually specified on the command line for the application using the option --hpx:threads.

None

Returns the total number of HPX-threads 'stolen' from the staged thread queue by a neighboring worker thread (these threads are executed by a different worker thread than they were initially scheduled on). This counter is available only if the configuration time constant HPX_WITH_THREAD_STEALING_COUNTS is set to ON (default: ON).

/threads/count/stolen-to-pending

locality#*/total or
locality#*/worker-thread#*

where:
locality#* specifies the locality for which the number of HPX-threads stolen to the pending queue of all (or one) worker threads should be queried. The locality id (given by *) is a (zero based) number identifying the locality.

worker-thread#* specifies the worker thread for which the number of HPX-threads stolen to the pending queue should be queried. The worker thread number (given by *) is a (zero based) number identifying the worker thread. The number of available worker threads is usually specified on the command line for the application using the option --hpx:threads.

None

Returns the total number of HPX-threads 'stolen' to the pending thread queue of the worker thread (these threads are executed by a different worker thread than they were initially scheduled on). This counter is available only if the configuration time constant HPX_WITH_THREAD_STEALING_COUNTS is set to ON (default: ON).

/threads/count/stolen-to-staged

locality#*/total or
locality#*/worker-thread#*

where:
locality#* specifies the locality for which the number of HPX-threads stolen to the staged queue of all (or one) worker threads should be queried. The locality id (given by *) is a (zero based) number identifying the locality.

worker-thread#* specifies the worker thread for which the number of HPX-threads stolen to the staged queue should be queried. The worker thread number (given by *) is a (zero based) number identifying the worker thread. The number of available worker threads is usually specified on the command line for the application using the option --hpx:threads.

None

Returns the total number of HPX-threads 'stolen' to the staged thread queue of a neighboring worker thread (these threads are executed by a different worker thread than they were initially scheduled on). This counter is available only if the configuration time constant HPX_WITH_THREAD_STEALING_COUNTS is set to ON (default: ON).

/threads/count/objects

locality#*/total or
locality#*/allocator#*

where:
locality#* specifies the locality for which the current (cumulative) number of all created HPX-thread objects should be queried. The locality id (given by *) is a (zero based) number identifying the locality.

allocator#* specifies the allocator instance with which the thread objects have been created. HPX uses a varying number of allocators to create (and recycle) HPX-thread objects; these counters are most likely of use for debugging purposes only. The allocator id (given by *) is a (zero based) number identifying the allocator to query.

None

Returns the total number of HPX-thread objects created. Note that thread objects are reused to improve system performance, thus this number does not reflect the number of actually executed (retired) HPX-threads.
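The thread counters listed above can be printed from the command line using the --hpx:print-counter option (see HPX Command Line Options). For instance, the following invocation (a sketch, assuming the hello_world example application has been built) prints the cumulative number of thread objects created on locality 0 at application exit:

./bin/hello_world -t2 --hpx:print-counter=/threads{locality#0/total}/count/objects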


Table 30. General Performance Counters exposing Characteristics of Localities

Counter Type

Counter Instance Formatting

Parameters

Description

/runtime/count/component

locality#*/total

where:
* is the locality id of the locality for which the number of components should be queried. The locality id is a (zero based) number identifying the locality.

The type of the component. This is the string which has been used while registering the component with HPX, e.g. which has been passed as the second parameter to the macro HPX_REGISTER_COMPONENT.

Returns the overall number of currently active components of the specified type on the given locality.

/runtime/count/action_invocation

locality#*/total

where:
* is the locality id of the locality for which the number of action invocations should be queried. The locality id is a (zero based) number identifying the locality.

The action type. This is the string which has been used while registering the action with HPX, e.g. which has been passed as the second parameter to the macro HPX_REGISTER_ACTION or HPX_REGISTER_ACTION_ID.

Returns the overall (local) invocation count of the specified action type on the given locality.

/runtime/count/remote_action_invocation

locality#*/total

where:
* is the locality id of the locality for which the number of action invocations should be queried. The locality id is a (zero based) number identifying the locality.

The action type. This is the string which has been used while registering the action with HPX, e.g. which has been passed as the second parameter to the macro HPX_REGISTER_ACTION or HPX_REGISTER_ACTION_ID.

Returns the overall (remote) invocation count of the specified action type on the given locality.

/runtime/uptime

locality#*/total

where:
* is the locality id of the locality for which the system uptime should be queried. The locality id is a (zero based) number identifying the locality.

None

Returns the overall time since application start on the given locality in nanoseconds.

/runtime/memory/virtual

locality#*/total

where:
* is the locality id of the locality for which the allocated virtual memory should be queried. The locality id is a (zero based) number identifying the locality.

None

Returns the amount of virtual memory currently allocated by the referenced locality (in bytes).

/runtime/memory/resident

locality#*/total

where:
* is the locality id of the locality for which the allocated resident memory should be queried. The locality id is a (zero based) number identifying the locality.

None

Returns the amount of resident memory currently allocated by the referenced locality (in bytes).

/runtime/io/read_bytes_issued

locality#*/total

where:
* is the locality id of the locality for which the number of bytes read should be queried. The locality id is a (zero based) number identifying the locality.

None

Returns the number of bytes read by the process (aggregate of count arguments passed to read() call or its analogues). This performance counter is available only on systems which expose the related data through the /proc file system.

/runtime/io/write_bytes_issued

locality#*/total

where:
* is the locality id of the locality for which the number of bytes written should be queried. The locality id is a (zero based) number identifying the locality.

None

Returns the number of bytes written by the process (aggregate of count arguments passed to write() call or its analogues). This performance counter is available only on systems which expose the related data through the /proc file system.

/runtime/io/read_syscalls

locality#*/total

where:
* is the locality id of the locality for which the number of system calls should be queried. The locality id is a (zero based) number identifying the locality.

None

Returns the number of system calls that perform I/O reads. This performance counter is available only on systems which expose the related data through the /proc file system.

/runtime/io/write_syscalls

locality#*/total

where:
* is the locality id of the locality for which the number of system calls should be queried. The locality id is a (zero based) number identifying the locality.

None

Returns the number of system calls that perform I/O writes. This performance counter is available only on systems which expose the related data through the /proc file system.

/runtime/io/read_bytes_transferred

locality#*/total

where:
* is the locality id of the locality for which the number of bytes transferred should be queried. The locality id is a (zero based) number identifying the locality.

None

Returns the number of bytes retrieved from storage by I/O operations. This performance counter is available only on systems which expose the related data through the /proc file system.

/runtime/io/write_bytes_transferred

locality#*/total

where:
* is the locality id of the locality for which the number of bytes transferred should be queried. The locality id is a (zero based) number identifying the locality.

None

Returns the number of bytes sent to the storage layer by I/O operations. This performance counter is available only on systems which expose the related data through the /proc file system.

/runtime/io/write_bytes_cancelled

locality#*/total

where:
* is the locality id of the locality for which the number of bytes not being transferred should be queried. The locality id is a (zero based) number identifying the locality.

None

Returns the number of bytes accounted for by write_bytes_transferred that have not ultimately been stored due to truncation or deletion. This performance counter is available only on systems which expose the related data through the /proc file system.
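As with the thread counters, these locality counters can be printed from the command line. For instance, the following invocation (a sketch; the memory and I/O counters are available only on platforms exposing the corresponding data) prints the resident memory usage and the uptime of locality 0:

./bin/hello_world -t1 \
    --hpx:print-counter=/runtime{locality#0/total}/memory/resident \
    --hpx:print-counter=/runtime{locality#0/total}/uptime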


Table 31. Performance Counters for General Statistics

Counter Type

Counter Instance Formatting

Parameters

Description

/statistics/average

Any full performance counter name. The referenced performance counter is queried at fixed time intervals as specified by the first parameter.

Returns the current average (mean) value calculated based on the values queried from the underlying counter (the one specified as the instance name).

Any parameter will be interpreted as the time interval (in milliseconds) at which the underlying counter should be queried. If no value is specified, the counter will assume 1000 [ms] as the default.

/statistics/stddev

Any full performance counter name. The referenced performance counter is queried at fixed time intervals as specified by the first parameter.

Returns the current standard deviation (stddev) value calculated based on the values queried from the underlying counter (the one specified as the instance name).

Any parameter will be interpreted as the time interval (in milliseconds) at which the underlying counter should be queried. If no value is specified, the counter will assume 1000 [ms] as the default.

/statistics/rolling_average

Any full performance counter name. The referenced performance counter is queried at fixed time intervals as specified by the first parameter.

Returns the current rolling average (mean) value calculated based on the values queried from the underlying counter (the one specified as the instance name).

Any parameter will be interpreted as a list of two comma separated (integer) values, where the first is the time interval (in milliseconds) at which the underlying counter should be queried. If no value is specified, the counter will assume 1000 [ms] as the default. The second value will be interpreted as the size of the rolling window (the number of latest values to use to calculate the rolling average). The default value for this is 10.

/statistics/median

Any full performance counter name. The referenced performance counter is queried at fixed time intervals as specified by the first parameter.

Returns the current (statistically estimated) median value calculated based on the values queried from the underlying counter (the one specified as the instance name).

Any parameter will be interpreted as the time interval (in milliseconds) at which the underlying counter should be queried. If no value is specified, the counter will assume 1000 [ms] as the default.

/statistics/max

Any full performance counter name. The referenced performance counter is queried at fixed time intervals as specified by the first parameter.

Returns the current maximum value calculated based on the values queried from the underlying counter (the one specified as the instance name).

Any parameter will be interpreted as the time interval (in milliseconds) at which the underlying counter should be queried. If no value is specified, the counter will assume 1000 [ms] as the default.

/statistics/min

Any full performance counter name. The referenced performance counter is queried at fixed time intervals as specified by the first parameter.

Returns the current minimum value calculated based on the values queried from the underlying counter (the one specified as the instance name).

Any parameter will be interpreted as the time interval (in milliseconds) at which the underlying counter should be queried. If no value is specified, the counter will assume 1000 [ms] as the default.
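A statistics counter embeds the full name of the underlying counter as its instance name and receives its sampling parameters after a '@'. For instance, the following invocation (a sketch; it additionally requires HPX_WITH_THREAD_IDLE_RATES to be set to ON) prints a rolling average of the idle rate on locality 0, sampled every 1000 [ms] over a window of the 10 latest values:

./bin/hello_world -t2 \
    --hpx:print-counter=/statistics{/threads{locality#0/total}/idle-rate}/rolling_average@1000,10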


Table 32. Performance Counters for Elementary Arithmetic Operations

Counter Type

Counter Instance Formatting

Parameters

Description

/arithmetics/add

None

Returns the sum calculated based on the values queried from the underlying counters (the ones specified as the parameters).

The parameter will be interpreted as a comma separated list of full performance counter names which are queried whenever this counter is accessed. Any wildcards in the counter names will be expanded.

/arithmetics/subtract

None

Returns the difference calculated based on the values queried from the underlying counters (the ones specified as the parameters).

The parameter will be interpreted as a comma separated list of full performance counter names which are queried whenever this counter is accessed. Any wildcards in the counter names will be expanded.

/arithmetics/multiply

None

Returns the product calculated based on the values queried from the underlying counters (the ones specified as the parameters).

The parameter will be interpreted as a comma separated list of full performance counter names which are queried whenever this counter is accessed. Any wildcards in the counter names will be expanded.

/arithmetics/divide

None

Returns the result of division of the values queried from the underlying counters (the ones specified as the parameters).

The parameter will be interpreted as a comma separated list of full performance counter names which are queried whenever this counter is accessed. Any wildcards in the counter names will be expanded.


[Note]Note

The /arithmetics counters can consume an arbitrary number of other counters. For this reason they have to be specified as parameters (a comma separated list of counters appended after a '@'). For instance:

./bin/hello_world -t2 \
    --hpx:print-counter=/threads{locality#0/worker-thread#*}/count/cumulative \
    --hpx:print-counter=/arithmetics/add@/threads{locality#0/worker-thread#*}/count/cumulative
hello world from OS-thread 0 on locality 0
hello world from OS-thread 1 on locality 0
/threads{locality#0/worker-thread#0}/count/cumulative,1,0.515640,[s],25
/threads{locality#0/worker-thread#1}/count/cumulative,1,0.515520,[s],36
/arithmetics/add@/threads{locality#0/worker-thread#*}/count/cumulative,1,0.516445,[s],64

Since all wildcards in the parameters are expanded, this example is fully equivalent to specifying both counters separately to /arithmetics/add:

./bin/hello_world -t2 \
    --hpx:print-counter=/threads{locality#0/worker-thread#*}/count/cumulative \
    --hpx:print-counter=/arithmetics/add@\
        /threads{locality#0/worker-thread#0}/count/cumulative,\
        /threads{locality#0/worker-thread#1}/count/cumulative

The HPX runtime has six thread scheduling policies: local-priority, local, abp-priority, hierarchy, static-priority, and periodic-priority. These policies can be specified from the command line using the command line option --hpx:queuing. In order to use a particular scheduling policy, the runtime system must be built with the appropriate scheduler flag turned on (e.g. cmake -DHPX_THREAD_SCHEDULERS=local, see CMake Variables used to configure HPX for more information).
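For instance, the static-priority policy could be selected at run time as follows (a sketch, assuming the corresponding scheduler was enabled at build time):

./bin/hello_world --hpx:threads=4 --hpx:queuing=static-priority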

Priority Local Scheduling Policy (default policy)

The priority local scheduling policy maintains one queue per operating system (OS) thread. The OS thread pulls its work from this queue. By default the number of high priority queues is equal to the number of OS threads; the number of high priority queues can be specified on the command line using --hpx:high-priority-threads. High priority threads are executed by any of the OS threads before any other work is executed. When a queue is empty work will be taken from high priority queues first. There is one low priority queue from which threads will be scheduled only when there is no other work.

For this scheduling policy there is an option to turn on NUMA sensitivity using the command line option --hpx:numa-sensitive. When NUMA sensitivity is turned on, work stealing is done from queues associated with the same NUMA domain first; only after that is work stolen from other NUMA domains.
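A possible invocation enabling NUMA sensitive scheduling for the default policy could look like this (a sketch):

./bin/hello_world --hpx:threads=8 --hpx:numa-sensitive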

This scheduler is enabled at build time by default and is always available.

Static Priority Scheduling Policy
  • invoke using: --hpx:queuing=static-priority (or -qs)
  • flag to turn on for build: HPX_THREAD_SCHEDULERS=all or HPX_THREAD_SCHEDULERS=static-priority

The static priority scheduling policy maintains one queue per OS thread from which each OS thread pulls its tasks (user threads). Threads are distributed in a round robin fashion. There is no thread stealing in this policy.

Local Scheduling Policy
  • invoke using: --hpx:queuing=local (or -ql)
  • flag to turn on for build: HPX_THREAD_SCHEDULERS=all or HPX_THREAD_SCHEDULERS=local

The local scheduling policy maintains one queue per OS thread from which each OS thread pulls its tasks (user threads).

Static Scheduling Policy
  • invoke using: --hpx:queuing=static
  • flag to turn on for build: HPX_THREAD_SCHEDULERS=all or HPX_THREAD_SCHEDULERS=static

The static scheduling policy maintains one queue per OS thread from which each OS thread pulls its tasks (user threads). Threads are distributed in a round robin fashion. There is no thread stealing in this policy.

Priority ABP Scheduling Policy
  • invoke using: --hpx:queuing=abp-priority
  • flag to turn on for build: HPX_THREAD_SCHEDULERS=all or HPX_THREAD_SCHEDULERS=abp-priority

The priority ABP policy maintains a double-ended lock-free queue for each OS thread. By default the number of high priority queues is equal to the number of OS threads; the number of high priority queues can be specified on the command line using --hpx:high-priority-threads. High priority threads are executed by the first OS threads before any other work is executed. When a queue is empty work will be taken from high priority queues first. There is one low priority queue from which threads will be scheduled only when there is no other work. For this scheduling policy there is an option to turn on NUMA sensitivity using the command line option --hpx:numa-sensitive. When NUMA sensitivity is turned on, work stealing is done from queues associated with the same NUMA domain first; only after that is work stolen from other NUMA domains.

Hierarchy Scheduling Policy
  • invoke using: --hpx:queuing=hierarchy (or -qh)
  • flag to turn on for build: HPX_THREAD_SCHEDULERS=all or HPX_THREAD_SCHEDULERS=hierarchy

The hierarchy policy maintains a tree of work items. Every OS thread walks the tree to obtain new work. The arity of the thread queue tree can be specified on the command line using --hpx:hierarchy-arity (default is 2). Work stealing is done from the parent queue in that tree.

Periodic Priority Scheduling Policy
  • invoke using: --hpx:queuing=periodic-priority

The periodic priority policy maintains one queue of work items (user threads) for each OS thread, a number of high priority queues (specified by --hpx:high-priority-threads), and one low priority queue. High priority threads are executed by the specified number of OS threads before any other work is executed. Low priority threads are executed when no other work is available.

Index


Reference

Header <hpx/components/component_storage/migrate_from_storage.hpp>
Function template migrate_from_storage
Header <hpx/components/component_storage/migrate_to_storage.hpp>
Function template migrate_to_storage
Function template migrate_to_storage
Header <hpx/error.hpp>
Type error — Possible error conditions.
Header <hpx/exception.hpp>
Class error_code — A hpx::error_code represents an arbitrary error condition.
Class exception — A hpx::exception is the main exception type used by HPX to report errors.
Struct thread_interrupted — A hpx::thread_interrupted is the exception type used by HPX to interrupt a running HPX thread.
Function diagnostic_information — Extract the diagnostic information embedded in the given exception and return a string holding a formatted message.
Function diagnostic_information — Extract the diagnostic information embedded in the given exception and return a string holding a formatted message.
Function get_error_what — Return the error message of the thrown exception.
Function get_error_what — Return the error message of the thrown exception.
Function get_error_locality_id — Return the locality id where the exception was thrown.
Function get_error_locality_id — Return the locality id where the exception was thrown.
Function get_error — Return the error value code of the exception which was thrown.
Function get_error — Return the error value code of the exception which was thrown.
Function get_error_host_name — Return the hostname of the locality where the exception was thrown.
Function get_error_host_name — Return the hostname of the locality where the exception was thrown.
Function get_error_process_id — Return the (operating system) process id of the locality where the exception was thrown.
Function get_error_process_id — Return the (operating system) process id of the locality where the exception was thrown.
Function get_error_env — Return the environment of the OS-process at the point the exception was thrown.
Function get_error_env — Return the environment of the OS-process at the point the exception was thrown.
Function get_error_function_name — Return the function name from which the exception was thrown.
Function get_error_function_name — Return the function name from which the exception was thrown.
Function get_error_backtrace — Return the stack backtrace from the point the exception was thrown.
Function get_error_backtrace — Return the stack backtrace from the point the exception was thrown.
Function get_error_file_name — Return the (source code) file name of the function from which the exception was thrown.
Function get_error_file_name — Return the (source code) file name of the function from which the exception was thrown.
Function get_error_line_number — Return the line number in the (source code) file of the function from which the exception was thrown.
Function get_error_line_number — Return the line number in the (source code) file of the function from which the exception was thrown.
Function get_error_os_thread — Return the sequence number of the OS-thread used to execute HPX-threads from which the exception was thrown.
Function get_error_os_thread — Return the sequence number of the OS-thread used to execute HPX-threads from which the exception was thrown.
Function get_error_thread_id — Return the unique thread id of the HPX-thread from which the exception was thrown.
Function get_error_thread_id — Return the unique thread id of the HPX-thread from which the exception was thrown.
Function get_error_thread_description — Return any additionally available thread description of the HPX-thread from which the exception was thrown.
Function get_error_thread_description — Return any additionally available thread description of the HPX-thread from which the exception was thrown.
Function get_error_config — Return the HPX configuration information point from which the exception was thrown.
Function get_error_config — Return the HPX configuration information point from which the exception was thrown.
Function get_error_state — Return the HPX runtime state information at which the exception was thrown.
Function get_error_state — Return the HPX runtime state information at which the exception was thrown.
Macro HPX_THROW_EXCEPTION — Throw a hpx::exception initialized from the given parameters.
Macro HPX_THROWS_IF — Either throw a hpx::exception or initialize hpx::error_code from the given parameters.
Header <hpx/exception_fwd.hpp>
Global throws — Predefined error_code object used as "throw on error" tag.
Header <hpx/exception_list.hpp>
Class exception_list
Header <hpx/hpx_finalize.hpp>
Function finalize — Main function to gracefully terminate the HPX runtime system.
Function finalize — Main function to gracefully terminate the HPX runtime system.
Function terminate — Terminate any application non-gracefully.
Function disconnect — Disconnect this locality from the application.
Function disconnect — Disconnect this locality from the application.
Function stop — Stop the runtime system.
Header <hpx/hpx_fwd.hpp>
Type definition startup_function_type
Type definition shutdown_function_type
Function find_root_locality — Return the global id representing the root locality.
Function find_all_localities — Return the list of global ids representing all localities available to this application.
Function find_all_localities — Return the list of global ids representing all localities available to this application which support the given component type.
Function find_remote_localities — Return the list of locality ids of remote localities supporting the given component type. By default this function will return the list of all remote localities (all but the current locality).
Function find_remote_localities — Return the list of locality ids of remote localities supporting the given component type. By default this function will return the list of all remote localities (all but the current locality).
Function find_locality — Return the global id representing an arbitrary locality which supports the given component type.
Function get_num_localities_sync — Return the number of localities which are currently registered for the running application.
Function get_initial_num_localities — Return the number of localities which were registered at startup for the running application.
Function get_num_localities — Asynchronously return the number of localities which are currently registered for the running application.
Function get_num_localities_sync — Return the number of localities which are currently registered for the running application.
Function get_num_localities — Asynchronously return the number of localities which are currently registered for the running application.
Function register_pre_startup_function — Add a function to be executed by an HPX thread before hpx_main but guaranteed before any startup function is executed (system-wide).
Function register_startup_function — Add a function to be executed by an HPX thread before hpx_main but guaranteed after any pre-startup function is executed (system-wide).
Function register_pre_shutdown_function — Add a function to be executed by an HPX thread during hpx::finalize() but guaranteed before any shutdown function is executed (system-wide).
Function register_shutdown_function — Add a function to be executed by an HPX thread during hpx::finalize() but guaranteed after any pre-shutdown function is executed (system-wide).
Function is_starting — Test whether the runtime system is currently being started.
Function is_running — Test whether the runtime system is currently running.
Function is_stopped — Test whether the runtime system is currently stopped.
Function is_stopped_or_shutting_down — Test whether the runtime system is currently being shut down.
Function get_thread_name — Return the name of the calling thread.
Function get_num_worker_threads — Return the number of worker OS-threads used to execute HPX threads.
Function get_system_uptime — Return the system uptime measured on the thread executing this call.
Function get_colocation_id_sync — Return the id of the locality where the object referenced by the given id is currently located.
Function get_colocation_id — Asynchronously return the id of the locality where the object referenced by the given id is currently located.
Function start_active_counters — Start all active performance counters, optionally naming the section of code.
Function reset_active_counters — Resets all active performance counters.
Function stop_active_counters — Stop all active performance counters.
Function evaluate_active_counters — Evaluate and output all active performance counters, optionally naming the point in code marked by this function.
Function create_message_handler — Create an instance of a message handler plugin.
Function create_binary_filter — Create an instance of a binary filter plugin.
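Taken together, these runtime queries can be exercised from any HPX thread. The following illustrative sketch (not part of the generated reference) queries the available localities and worker threads and registers a shutdown callback; it assumes it is called from code already running inside the HPX runtime:

```cpp
// Illustrative only: query the runtime from an HPX thread and register
// a shutdown callback (assumes HPX is built and the runtime is running).
#include <hpx/hpx.hpp>
#include <iostream>
#include <vector>

void runtime_queries()
{
    std::vector<hpx::id_type> localities = hpx::find_all_localities();
    std::size_t workers = hpx::get_num_worker_threads();

    std::cout << localities.size() << " localities, "
              << workers << " worker OS-threads on this one\n";

    // Executed during hpx::finalize(), after all pre-shutdown functions.
    hpx::register_shutdown_function(
        []() { std::cout << "shutting down\n"; });
}
```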
Header <hpx/hpx_init.hpp>
Function init — Main entry point for launching the HPX runtime system.
Function init — Main entry point for launching the HPX runtime system.
Function init — Main entry point for launching the HPX runtime system.
Function init — Main entry point for launching the HPX runtime system.
Function init — Main entry point for launching the HPX runtime system.
Function init — Main entry point for launching the HPX runtime system.
Function init — Main entry point for launching the HPX runtime system.
Function init — Main entry point for launching the HPX runtime system.
Function init — Main entry point for launching the HPX runtime system.
Function init — Main entry point for launching the HPX runtime system.
Function init — Main entry point for launching the HPX runtime system.
Function init — Main entry point for launching the HPX runtime system.
Function init — Main entry point for launching the HPX runtime system.
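A minimal application using hpx::init might look like the following sketch (illustrative only; assumes the HPX headers and libraries are available):

```cpp
// Minimal HPX application skeleton: hpx::init bootstraps the runtime,
// runs hpx_main on an HPX thread, and blocks until hpx::finalize().
#include <hpx/hpx_init.hpp>

int hpx_main(int argc, char* argv[])
{
    // Application code executes here, on an HPX thread.
    return hpx::finalize();   // initiate a graceful shutdown
}

int main(int argc, char* argv[])
{
    return hpx::init(argc, argv);   // returns hpx_main's exit code
}
```

The start variants listed below behave the same way but return immediately instead of blocking until the runtime has stopped.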
Header <hpx/hpx_start.hpp>
Function start — Main non-blocking entry point for launching the HPX runtime system.
Function start — Main non-blocking entry point for launching the HPX runtime system.
Function start — Main non-blocking entry point for launching the HPX runtime system.
Function start — Main non-blocking entry point for launching the HPX runtime system.
Function start — Main non-blocking entry point for launching the HPX runtime system.
Function start — Main non-blocking entry point for launching the HPX runtime system.
Function start — Main non-blocking entry point for launching the HPX runtime system.
Function start — Main non-blocking entry point for launching the HPX runtime system.
Function start — Main non-blocking entry point for launching the HPX runtime system.
Function start — Main non-blocking entry point for launching the HPX runtime system.
Function start — Main non-blocking entry point for launching the HPX runtime system.
Function start — Main non-blocking entry point for launching the HPX runtime system.
Function start — Main non-blocking entry point for launching the HPX runtime system.
Header <hpx/lcos/broadcast.hpp>
Function template broadcast — Perform a distributed broadcast operation.
Function template broadcast_apply — Perform an asynchronous (fire&forget) distributed broadcast operation.
Function template broadcast_with_index — Perform a distributed broadcast operation.
Function template broadcast_apply_with_index — Perform an asynchronous (fire&forget) distributed broadcast operation.
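As a sketch of how a distributed broadcast is used (illustrative only; locality_number and locality_number_action are names introduced here for the example):

```cpp
// Broadcast an action to a list of localities; the returned future
// becomes ready once all invocations have completed.
#include <hpx/hpx.hpp>
#include <hpx/lcos/broadcast.hpp>
#include <vector>

boost::uint32_t locality_number() { return hpx::get_locality_id(); }
HPX_PLAIN_ACTION(locality_number, locality_number_action);

// broadcast requires its own registration for each action used with it
HPX_REGISTER_BROADCAST_ACTION_DECLARATION(locality_number_action);
HPX_REGISTER_BROADCAST_ACTION(locality_number_action);

void example()
{
    std::vector<hpx::id_type> localities = hpx::find_all_localities();
    hpx::future<std::vector<boost::uint32_t> > result =
        hpx::lcos::broadcast<locality_number_action>(localities);
    // result.get() holds one return value per locality
}
```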
Header <hpx/lcos/fold.hpp>
Function template fold — Perform a distributed fold operation.
Function template fold_with_index — Perform a distributed folding operation.
Function template inverse_fold — Perform a distributed inverse folding operation.
Function template inverse_fold_with_index — Perform a distributed inverse folding operation.
Header <hpx/lcos/gather.hpp>
Function template gather_here
Function template gather_there
Function template gather_here
Function template gather_there
Header <hpx/lcos/wait_all.hpp>
Function template wait_all
Function template wait_all
Function template wait_all
Function template wait_all_n
Header <hpx/lcos/wait_any.hpp>
Function template wait_any
Function template wait_any
Function template wait_any
Function template wait_any
Function template wait_any_n
Header <hpx/lcos/wait_each.hpp>
Function template wait_each
Function template wait_each
Function template wait_each
Function template wait_each_n
Header <hpx/lcos/wait_some.hpp>
Function template wait_some
Function template wait_some
Function template wait_some
Function template wait_some_n
Header <hpx/lcos/when_all.hpp>
Function template when_all
Function template when_all
Function template when_all
Function template when_all_n
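A sketch of how these composition facilities fit together (illustrative only; square is a helper defined for the example):

```cpp
// when_all composes futures without blocking; wait_all blocks instead.
#include <hpx/hpx.hpp>
#include <vector>

int square(int x) { return x * x; }   // illustrative helper

void example()
{
    std::vector<hpx::future<int> > futures;
    for (int i = 0; i != 4; ++i)
        futures.push_back(hpx::async(&square, i));

    // Non-blocking: yields a future that is ready when all inputs are.
    hpx::future<std::vector<hpx::future<int> > > all =
        hpx::when_all(futures);

    // Blocking alternative for a set of futures:
    //   hpx::wait_all(futures);

    std::vector<hpx::future<int> > results = all.get();
}
```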
Header <hpx/lcos/when_any.hpp>
Struct template when_any_result
Function template when_any
Function template when_any
Function template when_any
Function template when_any_n
Header <hpx/lcos/when_each.hpp>
Function template when_each
Function template when_each
Function template when_each
Function template when_each_n
Header <hpx/lcos/when_some.hpp>
Struct template when_some_result
Function template when_some
Function template when_some
Function template when_some
Function template when_some
Function template when_some_n
Header <hpx/lcos_fwd.hpp>
Header <hpx/parallel/algorithms/adjacent_difference.hpp>
Function template adjacent_difference
Function template adjacent_difference
Header <hpx/parallel/algorithms/adjacent_find.hpp>
Function template adjacent_find
Function template adjacent_find
Header <hpx/parallel/algorithms/all_any_none.hpp>
Function template none_of
Function template any_of
Function template all_of
Header <hpx/parallel/algorithms/copy.hpp>
Function template copy
Function template copy_n
Function template copy_if
Header <hpx/parallel/container_algorithms/copy.hpp>
Function template copy
Function template copy_if
Header <hpx/parallel/algorithms/count.hpp>
Function template count
Function template count_if
Header <hpx/parallel/algorithms/equal.hpp>
Function template equal
Function template equal
Function template equal
Function template equal
Header <hpx/parallel/algorithms/exclusive_scan.hpp>
Function template exclusive_scan
Function template exclusive_scan
Header <hpx/parallel/algorithms/fill.hpp>
Function template fill
Function template fill_n
Header <hpx/parallel/algorithms/find.hpp>
Function template find
Function template find_if
Function template find_if_not
Function template find_end
Function template find_end
Function template find_first_of
Function template find_first_of
Header <hpx/parallel/algorithms/for_each.hpp>
Function template for_each_n
Function template for_each
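A minimal sketch of a parallel for_each invocation (illustrative only; assumes HPX is available):

```cpp
// Parallel for_each over a vector, using the parallel execution policy.
// In HPX V0.9.12 the parallel algorithms live in namespace hpx::parallel.
#include <hpx/include/parallel_for_each.hpp>
#include <vector>

void example()
{
    std::vector<int> v(1000, 1);

    // Apply the lambda to every element, potentially in parallel.
    hpx::parallel::for_each(hpx::parallel::par,
        v.begin(), v.end(),
        [](int& i) { i *= 2; });
}
```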
Header <hpx/parallel/container_algorithms/for_each.hpp>
Function template for_each
Header <hpx/parallel/algorithms/generate.hpp>
Function template generate
Function template generate_n
Header <hpx/parallel/container_algorithms/generate.hpp>
Function template generate
Header <hpx/parallel/algorithms/includes.hpp>
Function template includes
Function template includes
Header <hpx/parallel/algorithms/inclusive_scan.hpp>
Function template inclusive_scan
Function template inclusive_scan
Function template inclusive_scan
Header <hpx/parallel/algorithms/inner_product.hpp>
Function template inner_product
Function template inner_product
Header <hpx/parallel/algorithms/is_partitioned.hpp>
Function template is_partitioned
Header <hpx/parallel/algorithms/is_sorted.hpp>
Function template is_sorted
Function template is_sorted
Function template is_sorted_until
Function template is_sorted_until
Header <hpx/parallel/algorithms/lexicographical_compare.hpp>
Function template lexicographical_compare
Function template lexicographical_compare
Header <hpx/parallel/algorithms/minmax.hpp>
Function template min_element
Function template max_element
Function template minmax_element
Header <hpx/parallel/container_algorithms/minmax.hpp>
Function template min_element
Function template max_element
Function template minmax_element
Header <hpx/parallel/algorithms/mismatch.hpp>
Function template mismatch
Function template mismatch
Function template mismatch
Function template mismatch
Header <hpx/parallel/algorithms/move.hpp>
Function template move
Header <hpx/parallel/algorithms/reduce.hpp>
Function template reduce
Function template reduce
Function template reduce
Header <hpx/lcos/reduce.hpp>
Function template reduce — Perform a distributed reduction operation.
Function template reduce_with_index — Perform a distributed reduction operation.
Header <hpx/parallel/algorithms/remove_copy.hpp>
Function template remove_copy
Function template remove_copy_if
Header <hpx/parallel/container_algorithms/remove_copy.hpp>
Function template remove_copy
Function template remove_copy_if
Header <hpx/parallel/algorithms/replace.hpp>
Function template replace
Function template replace_if
Function template replace_copy
Function template replace_copy_if
Header <hpx/parallel/container_algorithms/replace.hpp>
Function template replace
Function template replace_if
Function template replace_copy
Function template replace_copy_if
Header <hpx/parallel/algorithms/reverse.hpp>
Function template reverse
Function template reverse_copy
Header <hpx/parallel/container_algorithms/reverse.hpp>
Function template reverse
Function template reverse_copy
Header <hpx/parallel/algorithms/rotate.hpp>
Function template rotate
Function template rotate_copy
Header <hpx/parallel/container_algorithms/rotate.hpp>
Function template rotate
Function template rotate_copy
Header <hpx/parallel/algorithms/search.hpp>
Function template search
Function template search
Function template search_n
Function template search_n
Header <hpx/parallel/algorithms/set_difference.hpp>
Function template set_difference
Function template set_difference
Header <hpx/parallel/algorithms/set_intersection.hpp>
Function template set_intersection
Function template set_intersection
Header <hpx/parallel/algorithms/set_symmetric_difference.hpp>
Function template set_symmetric_difference
Function template set_symmetric_difference
Header <hpx/parallel/algorithms/set_union.hpp>
Function template set_union
Function template set_union
Header <hpx/parallel/container_algorithms/sort.hpp>
Function template sort
Header <hpx/parallel/algorithms/sort_by_key.hpp>
Function template sort_by_key
Header <hpx/parallel/algorithms/swap_ranges.hpp>
Function template swap_ranges
Header <hpx/parallel/algorithms/transform.hpp>
Function template transform
Function template transform
Function template transform
Header <hpx/parallel/container_algorithms/transform.hpp>
Function template transform
Function template transform
Function template transform
Header <hpx/parallel/algorithms/transform_exclusive_scan.hpp>
Function template transform_exclusive_scan
Header <hpx/parallel/algorithms/transform_inclusive_scan.hpp>
Function template transform_inclusive_scan
Function template transform_inclusive_scan
Header <hpx/parallel/algorithms/transform_reduce.hpp>
Function template transform_reduce
Header <hpx/parallel/algorithms/uninitialized_copy.hpp>
Function template uninitialized_copy
Function template uninitialized_copy_n
Header <hpx/parallel/algorithms/uninitialized_fill.hpp>
Function template uninitialized_fill
Function template uninitialized_fill_n
Header <hpx/parallel/execution_policy.hpp>
Struct template rebind_executor
Struct sequential_task_execution_policy
Struct template sequential_task_execution_policy_shim
Struct sequential_execution_policy
Struct template sequential_execution_policy_shim
Struct parallel_task_execution_policy
Struct template parallel_task_execution_policy_shim
Struct parallel_execution_policy
Struct template parallel_execution_policy_shim
Struct parallel_vector_execution_policy
Struct template is_rebound_execution_policy
Struct template is_execution_policy
Struct template is_parallel_execution_policy
Struct template is_sequential_execution_policy
Struct template is_async_execution_policy
Class execution_policy
Global task
Global seq — Default sequential execution policy object.
Global par — Default parallel execution policy object.
Global par_vec — Default vector execution policy object.
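The policy objects above select how an algorithm executes; combining par with task makes the algorithm invocation asynchronous, as in this sketch (illustrative only):

```cpp
// Execution policies select how an algorithm runs: seq (sequential),
// par (parallel), and par(task), which makes the call asynchronous
// and returns a future instead of blocking.
#include <hpx/include/parallel_for_each.hpp>
#include <vector>

void example()
{
    std::vector<int> v(100, 0);

    // Asynchronous invocation: returns immediately with a future.
    auto f = hpx::parallel::for_each(
        hpx::parallel::par(hpx::parallel::task),
        v.begin(), v.end(),
        [](int& i) { ++i; });

    f.wait();   // block until the parallel loop has finished
}
```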
Header <hpx/parallel/executors/auto_chunk_size.hpp>
Struct auto_chunk_size
Header <hpx/parallel/executors/dynamic_chunk_size.hpp>
Struct dynamic_chunk_size
Header <hpx/parallel/executors/executor_parameter_traits.hpp>
Struct sequential_executor_parameters
Type definition executor_parameters_type
Function template variable_chunk_size
Function template get_chunk_size
Function template reset_thread_distribution
Function processing_units_count
Header <hpx/parallel/executors/executor_traits.hpp>
Struct sequential_execution_tag
Struct parallel_execution_tag
Struct vector_execution_tag
Struct template executor_traits
Header <hpx/parallel/executors/guided_chunk_size.hpp>
Struct guided_chunk_size
Header <hpx/parallel/executors/parallel_executor.hpp>
Struct parallel_executor
Header <hpx/parallel/executors/sequential_executor.hpp>
Struct sequential_executor
Header <hpx/parallel/executors/service_executors.hpp>
Struct service_executor
Header <hpx/parallel/executors/static_chunk_size.hpp>
Struct static_chunk_size
Header <hpx/parallel/executors/thread_pool_executors.hpp>
Type definition local_priority_queue_executor
Header <hpx/parallel/executors/timed_executor_traits.hpp>
Struct template timed_executor_traits
Header <hpx/parallel/task_block.hpp>
Class task_canceled_exception
Class template task_block
Function template define_task_block
Function template define_task_block
Function template define_task_block_restore_thread
Function template define_task_block_restore_thread
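A sketch of the task-block facility (illustrative only; exact namespaces may differ between HPX versions):

```cpp
// define_task_block forks child tasks which are all joined before the
// call returns.
#include <hpx/parallel/task_block.hpp>

int sum_two_branches()
{
    int left = 0, right = 0;

    hpx::parallel::define_task_block(
        [&](hpx::parallel::task_block<>& tb)
        {
            tb.run([&] { left = 1; });    // spawn one child task
            tb.run([&] { right = 2; });   // and another
        });  // implicit join: both children have completed here

    return left + right;
}
```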
Header <hpx/performance_counters/manage_counter_type.hpp>
Function install_counter_type — Install a new generic performance counter type in a way which will uninstall it automatically during shutdown.
Function install_counter_type — Install a new performance counter type in a way which will uninstall it automatically during shutdown.
Function install_counter_type — Install a new performance counter type in a way which will uninstall it automatically during shutdown.
Function install_counter_type — Install a new generic performance counter type in a way which will uninstall it automatically during shutdown.
Header <hpx/runtime/actions/basic_action.hpp>
Macro HPX_REGISTER_ACTION_DECLARATION — Declare the necessary component action boilerplate code. The macro HPX_REGISTER_ACTION_DECLARATION can be used to declare all the boilerplate code which is required for proper functioning of component actions in the context of HPX. The parameter action is the type of the action to declare the boilerplate for. This macro can be invoked with an optional second parameter, which specifies a unique name of the action to be used for serialization purposes. The second parameter has to be specified if the first parameter is not usable as a plain (non-qualified) C++ identifier, i.e. the first parameter contains special characters which cannot be part of a C++ identifier, such as '<', '>', or ':'.
Macro HPX_REGISTER_ACTION — Define the necessary component action boilerplate code.
Macro HPX_REGISTER_ACTION_ID — Define the necessary component action boilerplate code and assign a predefined unique id to the action.
Header <hpx/runtime/actions/component_action.hpp>
Macro HPX_DEFINE_COMPONENT_ACTION — Registers a member function of a component as an action type with HPX.
Header <hpx/runtime/actions/plain_action.hpp>
Macro HPX_DEFINE_PLAIN_ACTION — Defines a plain action type.
Macro HPX_PLAIN_ACTION — Defines a plain action type based on the given function func and registers it with HPX.
Macro HPX_PLAIN_ACTION_ID — Defines a plain action type based on the given function func, registers it with HPX, and assigns a predefined unique id to the action.
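A sketch of defining and invoking a plain action (illustrative only; add and add_action are names introduced for the example):

```cpp
// Define a plain action from a free function and invoke it on a
// (possibly remote) locality via hpx::async.
#include <hpx/hpx.hpp>

int add(int a, int b) { return a + b; }

// Generates the action type add_action and registers it with HPX.
HPX_PLAIN_ACTION(add, add_action);

void example()
{
    add_action act;
    // Invoke on this locality; the call works the same for remote ones.
    hpx::future<int> f = hpx::async(act, hpx::find_here(), 2, 3);
    (void) f.get();
}
```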
Header <hpx/runtime/agas_fwd.hpp>
Header <hpx/runtime/applier_fwd.hpp>
Function get_applier
Header <hpx/runtime/basename_registration.hpp>
Function find_all_from_basename
Function find_from_basename
Function find_from_basename — Return registered id from the given base name and sequence number.
Function register_with_basename — Register the given id using the given base name.
Function template register_with_basename
Function template register_with_basename
Function unregister_with_basename — Unregister the given id using the given base name.
Header <hpx/runtime/components/binpacking_distribution_policy.hpp>
Struct binpacking_distribution_policy
Global default_binpacking_counter_name
Global binpacked
Header <hpx/runtime/components/colocating_distribution_policy.hpp>
Struct colocating_distribution_policy
Global colocated
Header <hpx/runtime/components/component_factory.hpp>
Macro HPX_REGISTER_COMPONENT — Define a component factory for a component type.
Header <hpx/runtime/components/copy_component.hpp>
Function template copy — Copy given component to the specified target locality.
Function template copy — Copy given component to the specified target locality.
Function template copy — Copy given component to the specified target locality.
Header <hpx/runtime/components/default_distribution_policy.hpp>
Struct default_distribution_policy
Global default_layout
Header <hpx/runtime/components/migrate_component.hpp>
Function template migrate
Function template migrate
Function template migrate
Function template migrate
Header <hpx/runtime/components/new.hpp>
Function template new_ — Create one or more new instances of the given Component type on the specified locality.
Function template new_ — Create multiple new instances of the given Component type on the specified locality.
Function template new_ — Create one or more new instances of the given Component type based on the given distribution policy.
Function template new_ — Create multiple new instances of the given Component type on the localities as defined by the given distribution policy.
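A schematic sketch of component creation with hpx::new_ (illustrative only; some_component is a hypothetical component type, not part of HPX):

```cpp
// Illustrative sketch: hpx::new_ creates component instances; the type
// some_component here is hypothetical and stands for a user-defined
// HPX component.
#include <hpx/include/components.hpp>

void example()
{
    // One instance on this locality; the future resolves to its
    // global id once construction has finished.
    hpx::future<hpx::id_type> f =
        hpx::new_<some_component>(hpx::find_here(), 42 /* ctor arg */);

    hpx::id_type id = f.get();
    // ... apply actions to the new instance via 'id' ...
}
```

Passing a distribution policy (e.g. hpx::components::binpacked or hpx::components::colocated, listed above) instead of a locality id lets the policy decide where the instances are placed.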
Header <hpx/runtime/components_fwd.hpp>
Header <hpx/runtime/find_here.hpp>
Function find_here — Return the global id representing this locality.
Header <hpx/runtime/get_locality_id.hpp>
Function get_locality_id — Return the number of the locality this function is being called from.
Header <hpx/runtime/get_locality_name.hpp>
Function get_locality_name — Return the name of the locality this function is called on.
Function get_locality_name — Return the name of the referenced locality.
Header <hpx/runtime/get_os_thread_count.hpp>
Function get_os_thread_count — Return the number of worker OS-threads used by the given executor to execute HPX threads.
Header <hpx/runtime/get_ptr.hpp>
Function template get_ptr — Returns a future referring to the pointer to the underlying memory of a component.
Function template get_ptr_sync — Returns the pointer to the underlying memory of a component.
Header <hpx/runtime/get_worker_thread_num.hpp>
Function get_worker_thread_num — Return the number of the OS-thread on which the current HPX-thread is being executed.
Header <hpx/runtime/launch_policy.hpp>
Type launch
Header <hpx/runtime/naming/unmanaged.hpp>
Function unmanaged
Header <hpx/runtime/naming_fwd.hpp>
Global invalid_locality_id
Header <hpx/runtime/parcelset_fwd.hpp>
Header <hpx/runtime/runtime_mode.hpp>
Type runtime_mode
Function get_runtime_mode_name
Header <hpx/runtime/set_parcel_write_handler.hpp>
Type definition parcel_write_handler_type
Function set_parcel_write_handler
Header <hpx/runtime/threads/thread_data_fwd.hpp>
Function get_self
Function get_self_ptr
Function get_ctx_ptr
Function get_self_ptr_checked
Function get_self_id
Function get_parent_id
Function get_parent_phase
Function get_parent_locality_id
Function get_self_component_id
Function get_thread_manager
Function get_thread_count
Header <hpx/runtime/threads/thread_enums.hpp>
Type thread_state_enum
Type thread_priority
Type thread_state_ex_enum
Header <hpx/runtime/threads_fwd.hpp>
Header <hpx/runtime/trigger_lco.hpp>
Function trigger_lco_event — Trigger the LCO referenced by the given id.
Function trigger_lco_event — Trigger the LCO referenced by the given id.
Function trigger_lco_event — Trigger the LCO referenced by the given id.
Function trigger_lco_event — Trigger the LCO referenced by the given id.
Function template set_lco_value — Set the result value for the LCO referenced by the given id.
Function template set_lco_value — Set the result value for the LCO referenced by the given id.
Function template set_lco_value — Set the result value for the LCO referenced by the given id.
Function template set_lco_value — Set the result value for the LCO referenced by the given id.
Function set_lco_error — Set the error state for the LCO referenced by the given id.
Function set_lco_error — Set the error state for the LCO referenced by the given id.
Function set_lco_error — Set the error state for the LCO referenced by the given id.
Function set_lco_error — Set the error state for the LCO referenced by the given id.
Function set_lco_error — Set the error state for the LCO referenced by the given id.
Function set_lco_error — Set the error state for the LCO referenced by the given id.
Function set_lco_error — Set the error state for the LCO referenced by the given id.
Function set_lco_error — Set the error state for the LCO referenced by the given id.
Header <hpx/runtime_fwd.hpp>
Function get_runtime
Function get_runtime_instance_number
namespace hpx {
  namespace components {
    template<typename Component> 
      future< naming::id_type > 
      migrate_from_storage(naming::id_type const &, 
                           naming::id_type const & = naming::invalid_id);
  }
}

Function template migrate_from_storage

hpx::components::migrate_from_storage

Synopsis

// In header: <hpx/components/component_storage/migrate_from_storage.hpp>


template<typename Component> 
  future< naming::id_type > 
  migrate_from_storage(naming::id_type const & to_resurrect, 
                       naming::id_type const & target = naming::invalid_id);

Description

Migrate the component with the given id from the specified target storage (resurrect the object)

The function migrate_from_storage<Component> will migrate the component referenced by to_resurrect from the storage facility where the object is currently stored. It returns a future referring to the migrated component instance. The component instance is resurrected on the locality specified by target.

Parameters:

target

[in] The optional locality to resurrect the object on. By default the object is resurrected on the locality it was located on last.

to_resurrect

[in] The global id of the component to migrate.

Returns:

A future representing the global id of the migrated component instance. This should be the same as to_resurrect.

namespace hpx {
  namespace components {
    template<typename Component> 
      future< naming::id_type > 
      migrate_to_storage(naming::id_type const &, naming::id_type const &);
    template<typename Derived, typename Stub> 
      Derived migrate_to_storage(client_base< Derived, Stub > const &, 
                                 hpx::components::component_storage const &);
  }
}

Function template migrate_to_storage

hpx::components::migrate_to_storage

Synopsis

// In header: <hpx/components/component_storage/migrate_to_storage.hpp>


template<typename Component> 
  future< naming::id_type > 
  migrate_to_storage(naming::id_type const & to_migrate, 
                     naming::id_type const & target_storage);

Description

Migrate the component with the given id to the specified target storage

The function migrate_to_storage<Component> will migrate the component referenced by to_migrate to the storage facility specified with target_storage. It returns a future referring to the migrated component instance.

Parameters:

target_storage

[in] The id of the storage facility to migrate this object to.

to_migrate

[in] The global id of the component to migrate.

Returns:

A future representing the global id of the migrated component instance. This should be the same as to_migrate.


Function template migrate_to_storage

hpx::components::migrate_to_storage

Synopsis

// In header: <hpx/components/component_storage/migrate_to_storage.hpp>


template<typename Derived, typename Stub> 
  Derived migrate_to_storage(client_base< Derived, Stub > const & to_migrate, 
                             hpx::components::component_storage const & target_storage);

Description

Migrate the given component to the specified target storage

The function migrate_to_storage will migrate the component referenced by to_migrate to the storage facility specified with target_storage. It returns a future referring to the migrated component instance.

Parameters:

target_storage

[in] The id of the storage facility to migrate this object to.

to_migrate

[in] The client side representation of the component to migrate.

Returns:

A client side representation of the migrated component instance. This should be the same as to_migrate.

Header <hpx/error.hpp>

namespace hpx {
  enum error;
}

Type error

hpx::error — Possible error conditions.

Synopsis

// In header: <hpx/error.hpp>


enum error { success =  0, no_success =  1, not_implemented =  2, 
             out_of_memory =  3, bad_action_code =  4, 
             bad_component_type =  5, network_error =  6, 
             version_too_new =  7, version_too_old =  8, version_unknown =  9, 
             unknown_component_address =  10, 
             duplicate_component_address =  11, invalid_status =  12, 
             bad_parameter =  13, internal_server_error =  14, 
             service_unavailable =  15, bad_request =  16, 
             repeated_request =  17, lock_error =  18, 
             duplicate_console =  19, no_registered_console =  20, 
             startup_timed_out =  21, uninitialized_value =  22, 
             bad_response_type =  23, deadlock =  24, assertion_failure =  25, 
             null_thread_id =  26, invalid_data =  27, yield_aborted =  28, 
             dynamic_link_failure =  29, commandline_option_error =  30, 
             serialization_error =  31, unhandled_exception =  32, 
             kernel_error =  33, broken_task =  34, task_moved =  35, 
             task_already_started =  36, future_already_retrieved =  37, 
             promise_already_satisfied =  38, 
             future_does_not_support_cancellation =  39, 
             future_can_not_be_cancelled =  40, no_state =  41, 
             broken_promise =  42, thread_resource_error =  43, 
             future_cancelled =  44, thread_cancelled =  45, 
             thread_not_interruptable =  46, duplicate_component_id =  47, 
             unknown_error =  48, bad_plugin_type =  49, security_error =  50, 
             filesystem_error =  51, bad_function_call =  52, 
             task_canceled_exception =  53, task_block_not_active =  54, 
             out_of_range =  55, length_error =  56 };

Description

This enumeration lists all possible error conditions which can be reported from any of the API functions.

success
The operation was successful.
no_success
The operation failed, but not in an unexpected manner.
not_implemented
The operation is not implemented.
out_of_memory
The operation caused an out-of-memory condition.
bad_component_type
The specified component type is not known or otherwise invalid.
network_error
A generic network error occurred.
version_too_new
The version of the network representation for this object is too new.
version_too_old
The version of the network representation for this object is too old.
version_unknown
The version of the network representation for this object is unknown.
duplicate_component_address
The given global id has already been registered.
invalid_status
The operation was executed in an invalid status.
bad_parameter
One of the supplied parameters is invalid.
duplicate_console
There is more than one console locality.
no_registered_console
There is no registered console locality available.
null_thread_id
Attempt to invoke an API function from a non-HPX thread.
yield_aborted
The yield operation was aborted.
commandline_option_error
One of the options given on the command line is erroneous.
serialization_error
There was an error during serialization of this object.
unhandled_exception
An unhandled exception has been caught.
kernel_error
The OS kernel reported an error.
broken_task
The task associated with this future object is not available anymore.
task_moved
The task associated with this future object has been moved.
task_already_started
The task associated with this future object has already been started.
future_already_retrieved
The future object has already been retrieved.
promise_already_satisfied
The value for this future object has already been set.
future_does_not_support_cancellation
The future object does not support cancellation.
future_can_not_be_cancelled
The future can't be canceled at this time.
no_state
The future object has no valid shared state.
broken_promise
The promise has been deleted.
duplicate_component_id
The component type has already been registered.
unknown_error
An unknown error occurred.
bad_plugin_type
The specified plugin type is not known or otherwise invalid.
security_error
An error occurred in the security component.
filesystem_error
The specified file does not exist, or another filesystem-related error occurred.
bad_function_call
Equivalent to std::bad_function_call.
task_canceled_exception
Equivalent to parallel::v2::task_canceled_exception.
task_block_not_active
The task_region is not active.
out_of_range
Equivalent to std::out_of_range.
length_error
Equivalent to std::length_error.
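Most HPX API functions report these conditions in one of two ways, sketched below (illustrative only; some_id stands for any valid global id):

```cpp
// Two error-reporting styles offered by most HPX API functions:
// throwing hpx::exception (the default), or passing an hpx::error_code
// out-parameter to receive the error condition without a throw.
#include <hpx/hpx.hpp>
#include <iostream>

void example(hpx::id_type const& some_id)
{
    // Style 1: exceptions
    try {
        hpx::get_colocation_id_sync(some_id);
    }
    catch (hpx::exception const& e) {
        std::cerr << "error " << hpx::get_error(e)
                  << ": " << e.what() << '\n';
    }

    // Style 2: error_code out-parameter suppresses the throw
    hpx::error_code ec;
    hpx::get_colocation_id_sync(some_id, ec);
    if (ec)
        std::cerr << "failed: " << ec.message() << '\n';
}
```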

HPX_THROW_EXCEPTION(errcode, f, msg)
HPX_THROWS_IF(ec, errcode, f, msg)
namespace hpx {
  class error_code;
  class exception;

  struct thread_interrupted;

  // Encode error category for new error_code. 
  enum throwmode { plain =  0, rethrow =  1, lightweight =  0x80 };

  // Returns a new error_code constructed from the given parameters. 
  error_code make_error_code(error e, throwmode mode = plain);
  error_code make_error_code(error e, char const * func, char const * file, 
                             long line, throwmode mode = plain);

  // Returns error_code(e, msg, mode). 
  error_code make_error_code(error e, char const * msg, 
                             throwmode mode = plain);
  error_code make_error_code(error e, char const * msg, char const * func, 
                             char const * file, long line, 
                             throwmode mode = plain);

  // Returns error_code(e, msg, mode). 
  error_code make_error_code(error e, std::string const & msg, 
                             throwmode mode = plain);
  error_code make_error_code(error e, std::string const & msg, 
                             char const * func, char const * file, long line, 
                             throwmode mode = plain);
  error_code make_error_code(boost::exception_ptr const & e);

  // Returns generic HPX error category used for new errors. 
  boost::system::error_category const & get_hpx_category();

  // Returns generic HPX error category used for errors re-thrown after the exception has been de-serialized. 
  boost::system::error_category const & get_hpx_rethrow_category();

  // Returns error_code(hpx::success, "success", mode). 
  error_code make_success_code(throwmode mode = plain);
  std::string diagnostic_information(hpx::exception const &);
  std::string diagnostic_information(hpx::error_code const &);
  std::string get_error_what(hpx::exception const &);
  std::string get_error_what(hpx::error_code const &);
  boost::uint32_t get_error_locality_id(hpx::exception const &);
  boost::uint32_t get_error_locality_id(hpx::error_code const &);
  error get_error(hpx::exception const &);
  error get_error(hpx::error_code const &);
  std::string get_error_host_name(hpx::exception const &);
  std::string get_error_host_name(hpx::error_code const &);
  boost::int64_t get_error_process_id(hpx::exception const &);
  boost::int64_t get_error_process_id(hpx::error_code const &);
  std::string get_error_env(hpx::exception const &);
  std::string get_error_env(hpx::error_code const &);
  std::string get_error_function_name(hpx::exception const &);
  std::string get_error_function_name(hpx::error_code const &);
  std::string get_error_backtrace(hpx::exception const &);
  std::string get_error_backtrace(hpx::error_code const &);
  std::string get_error_file_name(hpx::exception const &);
  std::string get_error_file_name(hpx::error_code const &);
  int get_error_line_number(hpx::exception const &);
  int get_error_line_number(hpx::error_code const &);
  std::size_t get_error_os_thread(hpx::exception const &);
  std::size_t get_error_os_thread(hpx::error_code const &);
  std::size_t get_error_thread_id(hpx::exception const &);
  std::size_t get_error_thread_id(hpx::error_code const &);
  std::string get_error_thread_description(hpx::exception const &);
  std::string get_error_thread_description(hpx::error_code const &);
  std::string get_error_config(hpx::exception const &);
  std::string get_error_config(hpx::error_code const &);
  std::string get_error_state(hpx::exception const &);
  std::string get_error_state(hpx::error_code const &);
}
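The throw macros and helper functions above are typically used together, following the HPX convention that a function either throws an hpx::exception or reports the error through a caller-supplied hpx::error_code. The sketch below illustrates this convention; my_function and its argument check are hypothetical, and the sketch assumes the predefined hpx::throws error_code object and the hpx::bad_parameter error value:

```cpp
#include <hpx/exception.hpp>
#include <iostream>

// Hypothetical function following the HPX error reporting convention:
// throw by default, or set the caller-supplied error_code instead.
void my_function(int arg, hpx::error_code& ec = hpx::throws)
{
    if (arg < 0)
    {
        // Throws hpx::exception if ec is hpx::throws, otherwise sets ec.
        HPX_THROWS_IF(ec, hpx::bad_parameter,
            "my_function", "argument must be non-negative");
        return;
    }
    if (&ec != &hpx::throws)
        ec = hpx::make_success_code();
}

int main()
{
    hpx::error_code ec;
    my_function(-1, ec);                // reported through ec, no throw
    if (ec)
        std::cout << ec.get_message() << std::endl;

    try {
        my_function(-1);                // default mode: throws
    }
    catch (hpx::exception const& e) {
        std::cout << hpx::get_error_what(e) << std::endl;
    }
    return 0;
}
```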

Class error_code

hpx::error_code — A hpx::error_code represents an arbitrary error condition.

Synopsis

// In header: <hpx/exception.hpp>


class error_code {
public:
  // construct/copy/destruct
  explicit error_code(throwmode = plain);
  explicit error_code(error, throwmode = plain);
  error_code(error, char const *, char const *, long, throwmode = plain);
  error_code(error, char const *, throwmode = plain);
  error_code(error, char const *, char const *, char const *, long, 
             throwmode = plain);
  error_code(error, std::string const &, throwmode = plain);
  error_code(error, std::string const &, char const *, char const *, long, 
             throwmode = plain);
  error_code(int, hpx::exception const &);
  explicit error_code(boost::exception_ptr const &);
  error_code & operator=(error_code const &);

  // public member functions
  std::string get_message() const;
  void clear();
};

Description

The class hpx::error_code describes an object used to hold error code values, such as those originating from the operating system or other low-level application program interfaces.

Note

Class hpx::error_code is an adjunct to error reporting by exception.

error_code public construct/copy/destruct

  1. explicit error_code(throwmode mode = plain);

    Construct an object of type error_code.

    Parameters:

    mode

    The parameter mode specifies whether the constructed hpx::error_code belongs to the error category hpx_category (if mode is plain, this is the default) or to the category hpx_category_rethrow (if mode is rethrow).

    Throws:

    nothing
  2. explicit error_code(error e, throwmode mode = plain);

    Construct an object of type error_code.

    Parameters:

    e

    The parameter e holds the hpx::error code the new exception should encapsulate.

    mode

    The parameter mode specifies whether the constructed hpx::error_code belongs to the error category hpx_category (if mode is plain, this is the default) or to the category hpx_category_rethrow (if mode is rethrow).

    Throws:

    nothing
  3. error_code(error e, char const * func, char const * file, long line, 
               throwmode mode = plain);

    Construct an object of type error_code.

    Parameters:

    e

    The parameter e holds the hpx::error code the new exception should encapsulate.

    file

    The file name of the code where the error was raised.

    func

    The name of the function where the error was raised.

    line

    The line number of the code line where the error was raised.

    mode

    The parameter mode specifies whether the constructed hpx::error_code belongs to the error category hpx_category (if mode is plain, this is the default) or to the category hpx_category_rethrow (if mode is rethrow).

    Throws:

    nothing
  4. error_code(error e, char const * msg, throwmode mode = plain);

    Construct an object of type error_code.

    Parameters:

    e

    The parameter e holds the hpx::error code the new exception should encapsulate.

    mode

    The parameter mode specifies whether the constructed hpx::error_code belongs to the error category hpx_category (if mode is plain, this is the default) or to the category hpx_category_rethrow (if mode is rethrow).

    msg

    The parameter msg holds the error message the new exception should encapsulate.

    Throws:

    std::bad_alloc (if allocation of a copy of the passed string fails).
  5. error_code(error e, char const * msg, char const * func, char const * file, 
               long line, throwmode mode = plain);

    Construct an object of type error_code.

    Parameters:

    e

    The parameter e holds the hpx::error code the new exception should encapsulate.

    file

    The file name of the code where the error was raised.

    func

    The name of the function where the error was raised.

    line

    The line number of the code line where the error was raised.

    mode

    The parameter mode specifies whether the constructed hpx::error_code belongs to the error category hpx_category (if mode is plain, this is the default) or to the category hpx_category_rethrow (if mode is rethrow).

    msg

    The parameter msg holds the error message the new exception should encapsulate.

    Throws:

    std::bad_alloc (if allocation of a copy of the passed string fails).
  6. error_code(error e, std::string const & msg, throwmode mode = plain);

    Construct an object of type error_code.

    Parameters:

    e

    The parameter e holds the hpx::error code the new exception should encapsulate.

    mode

    The parameter mode specifies whether the constructed hpx::error_code belongs to the error category hpx_category (if mode is plain, this is the default) or to the category hpx_category_rethrow (if mode is rethrow).

    msg

    The parameter msg holds the error message the new exception should encapsulate.

    Throws:

    std::bad_alloc (if allocation of a copy of the passed string fails).
  7. error_code(error e, std::string const & msg, char const * func, 
               char const * file, long line, throwmode mode = plain);

    Construct an object of type error_code.

    Parameters:

    e

    The parameter e holds the hpx::error code the new exception should encapsulate.

    file

    The file name of the code where the error was raised.

    func

    The name of the function where the error was raised.

    line

    The line number of the code line where the error was raised.

    mode

    The parameter mode specifies whether the constructed hpx::error_code belongs to the error category hpx_category (if mode is plain, this is the default) or to the category hpx_category_rethrow (if mode is rethrow).

    msg

    The parameter msg holds the error message the new exception should encapsulate.

    Throws:

    std::bad_alloc (if allocation of a copy of the passed string fails).
  8. error_code(int err, hpx::exception const & e);
  9. explicit error_code(boost::exception_ptr const & e);
  10. error_code & operator=(error_code const & rhs);

    Assignment operator for error_code

Note

    This function maintains the error category of the left hand side if the right hand side is a success code.

error_code public member functions

  1. std::string get_message() const;

Return the error message stored in the hpx::error_code.

    Throws:

    nothing
  2. void clear();
    Clear this error_code object. The postconditions of invoking this method are:
    • value() == hpx::success and category() == hpx::get_hpx_category()


Class exception

hpx::exception — A hpx::exception is the main exception type used by HPX to report errors.

Synopsis

// In header: <hpx/exception.hpp>


class exception {
public:
  // construct/copy/destruct
  explicit exception(error = success);
  explicit exception(boost::system::system_error const &);
  exception(error, char const *, throwmode = plain);
  exception(error, std::string const &, throwmode = plain);
  ~exception();

  // public member functions
  error get_error() const;
  error_code get_error_code(throwmode = plain) const;
};

Description

The hpx::exception type is the main exception type used by HPX to report errors. Any exceptions thrown by functions in the HPX library are either of this type or of a type derived from it. It is therefore always sufficient to catch this one type when guarding HPX library calls.
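Since all HPX errors surface as hpx::exception (or a derived type), a single catch clause suffices. A minimal sketch, assuming the hpx::bad_parameter error value; the guarded_call function and the thrown error are purely illustrative:

```cpp
#include <hpx/exception.hpp>
#include <iostream>

// Hypothetical sketch: guard HPX library calls with a single catch clause
// for hpx::exception and dispatch on the stored hpx::error value.
void guarded_call()
{
    try {
        HPX_THROW_EXCEPTION(hpx::bad_parameter,
            "guarded_call", "demonstration error");
    }
    catch (hpx::exception const& e) {
        if (e.get_error() == hpx::bad_parameter)
            std::cout << "bad parameter: "
                      << hpx::get_error_what(e) << std::endl;
        else
            throw;  // unexpected error: propagate
    }
}
```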

exception public construct/copy/destruct

  1. explicit exception(error e = success);

    Construct a hpx::exception from a hpx::error.

    Parameters:

    e

    The parameter e holds the hpx::error code the new exception should encapsulate.

  2. explicit exception(boost::system::system_error const & e);
    Construct a hpx::exception from a boost::system::system_error.
  3. exception(error e, char const * msg, throwmode mode = plain);

    Construct a hpx::exception from a hpx::error and an error message.

    Parameters:

    e

    The parameter e holds the hpx::error code the new exception should encapsulate.

    mode

    The parameter mode specifies whether the returned hpx::error_code belongs to the error category hpx_category (if mode is plain, this is the default) or to the category hpx_category_rethrow (if mode is rethrow).

    msg

    The parameter msg holds the error message the new exception should encapsulate.

  4. exception(error e, std::string const & msg, throwmode mode = plain);

    Construct a hpx::exception from a hpx::error and an error message.

    Parameters:

    e

    The parameter e holds the hpx::error code the new exception should encapsulate.

    mode

    The parameter mode specifies whether the returned hpx::error_code belongs to the error category hpx_category (if mode is plain, this is the default) or to the category hpx_category_rethrow (if mode is rethrow).

    msg

    The parameter msg holds the error message the new exception should encapsulate.

  5. ~exception();

    Destruct a hpx::exception.

    Throws:

    nothing

exception public member functions

  1. error get_error() const;

    The function get_error() returns the hpx::error code stored in the referenced instance of a hpx::exception. It returns the hpx::error code this exception instance was constructed from.

    Throws:

    nothing
  2. error_code get_error_code(throwmode mode = plain) const;

    The function get_error_code() returns a hpx::error_code which represents the same error condition as this hpx::exception instance.

    Parameters:

    mode

    The parameter mode specifies whether the returned hpx::error_code belongs to the error category hpx_category (if mode is plain, this is the default) or to the category hpx_category_rethrow (if mode is rethrow).


Struct thread_interrupted

hpx::thread_interrupted — A hpx::thread_interrupted is the exception type used by HPX to interrupt a running HPX thread.

Synopsis

// In header: <hpx/exception.hpp>


struct thread_interrupted {
};

Description

The hpx::thread_interrupted type is the exception type used by HPX to interrupt a running thread.

A running thread can be interrupted by invoking the interrupt() member function of the corresponding hpx::thread object. When the interrupted thread next executes one of the specified interruption points (or if it is currently blocked whilst executing one) with interruption enabled, then a hpx::thread_interrupted exception will be thrown in the interrupted thread. If not caught, this will cause the execution of the interrupted thread to terminate. As with any other exception, the stack will be unwound, and destructors for objects of automatic storage duration will be executed.

If a thread wishes to avoid being interrupted, it can create an instance of hpx::this_thread::disable_interruption. Objects of this class disable interruption for the thread that created them on construction, and restore the interruption state to whatever it was before on destruction.

    void f()
    {
        // interruption enabled here
        {
            hpx::this_thread::disable_interruption di;
            // interruption disabled
            {
                hpx::this_thread::disable_interruption di2;
                // interruption still disabled
            } // di2 destroyed, interruption state restored
            // interruption still disabled
        } // di destroyed, interruption state restored
        // interruption now enabled
    }

The effects of an instance of hpx::this_thread::disable_interruption can be temporarily reversed by constructing an instance of hpx::this_thread::restore_interruption, passing in the hpx::this_thread::disable_interruption object in question. This will restore the interruption state to what it was when the hpx::this_thread::disable_interruption object was constructed, and then disable interruption again when the hpx::this_thread::restore_interruption object is destroyed.

    void g()
    {
        // interruption enabled here
        {
            hpx::this_thread::disable_interruption di;
            // interruption disabled
            {
                hpx::this_thread::restore_interruption ri(di);
                // interruption now enabled
            } // ri destroyed, interruption disabled again
            // interruption disabled
        } // di destroyed, interruption state restored
        // interruption now enabled
    }

At any point, the interruption state for the current thread can be queried by calling hpx::this_thread::interruption_enabled().


Function diagnostic_information

hpx::diagnostic_information — Extract the diagnostic information embedded in the given exception and return a string holding a formatted message.

Synopsis

// In header: <hpx/exception.hpp>


std::string diagnostic_information(hpx::exception const & e);

Description

The function hpx::diagnostic_information can be used to extract all diagnostic information stored in the given exception instance as a formatted string. This simplifies debug output as it composes the diagnostics into one, easy to use function call. This includes the name of the source file and line number, the sequence number of the OS-thread and the HPX-thread id, the locality id and the stack backtrace of the point where the original exception was thrown.

See Also:

hpx::get_error_locality_id(), hpx::get_error_host_name(), hpx::get_error_process_id(), hpx::get_error_function_name(), hpx::get_error_file_name(), hpx::get_error_line_number(), hpx::get_error_os_thread(), hpx::get_error_thread_id(), hpx::get_error_thread_description(), hpx::get_error(), hpx::get_error_backtrace(), hpx::get_error_env(), hpx::get_error_what(), hpx::get_error_config(), hpx::get_error_state()

Parameters:

e

The parameter e will be inspected for all diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception or hpx::error_code.

Returns:

The formatted string holding all of the available diagnostic information stored in the given exception instance.

Throws:

std::bad_alloc (if any of the required allocation operations fail)
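As a usage sketch (the thrown error here is purely illustrative), all of the diagnostic elements listed above can be printed with a single call:

```cpp
#include <hpx/exception.hpp>
#include <iostream>

int main()
{
    try {
        HPX_THROW_EXCEPTION(hpx::bad_parameter,
            "main", "demonstration error");
    }
    catch (hpx::exception const& e) {
        // One call yields file name and line number, function name,
        // locality id, thread ids and (if enabled) a stack backtrace.
        std::cout << hpx::diagnostic_information(e) << std::endl;
    }
    return 0;
}
```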

Function diagnostic_information

hpx::diagnostic_information — Extract the diagnostic information embedded in the given exception and return a string holding a formatted message.

Synopsis

// In header: <hpx/exception.hpp>


std::string diagnostic_information(hpx::error_code const & e);

Description

The function hpx::diagnostic_information can be used to extract all diagnostic information stored in the given exception instance as a formatted string. This simplifies debug output as it composes the diagnostics into one, easy to use function call. This includes the name of the source file and line number, the sequence number of the OS-thread and the HPX-thread id, the locality id and the stack backtrace of the point where the original exception was thrown.

See Also:

hpx::get_error_locality_id(), hpx::get_error_host_name(), hpx::get_error_process_id(), hpx::get_error_function_name(), hpx::get_error_file_name(), hpx::get_error_line_number(), hpx::get_error_os_thread(), hpx::get_error_thread_id(), hpx::get_error_thread_description(), hpx::get_error(), hpx::get_error_backtrace(), hpx::get_error_env(), hpx::get_error_what(), hpx::get_error_config(), hpx::get_error_state()

Parameters:

e

The parameter e will be inspected for all diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception or hpx::error_code.


Function get_error_what

hpx::get_error_what — Return the error message of the thrown exception.

Synopsis

// In header: <hpx/exception.hpp>


std::string get_error_what(hpx::exception const & e);

Description

The function hpx::get_error_what can be used to extract the diagnostic information element representing the error message as stored in the given exception instance.

See Also:

hpx::diagnostic_information(), hpx::get_error_host_name(), hpx::get_error_process_id(), hpx::get_error_function_name(), hpx::get_error_file_name(), hpx::get_error_line_number(), hpx::get_error_os_thread(), hpx::get_error_thread_id(), hpx::get_error_thread_description(), hpx::get_error(), hpx::get_error_backtrace(), hpx::get_error_env(), hpx::get_error_config(), hpx::get_error_state()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, boost::exception, or boost::exception_ptr.

Returns:

The error message stored in the exception. If the exception instance does not hold this information, the function will return an empty string.

Throws:

std::bad_alloc (if one of the required allocations fails)

Function get_error_what

hpx::get_error_what — Return the error message of the thrown exception.

Synopsis

// In header: <hpx/exception.hpp>


std::string get_error_what(hpx::error_code const & e);

Description

The function hpx::get_error_what can be used to extract the diagnostic information element representing the error message as stored in the given exception instance.

See Also:

hpx::diagnostic_information(), hpx::get_error_host_name(), hpx::get_error_process_id(), hpx::get_error_function_name(), hpx::get_error_file_name(), hpx::get_error_line_number(), hpx::get_error_os_thread(), hpx::get_error_thread_id(), hpx::get_error_thread_description(), hpx::get_error(), hpx::get_error_backtrace(), hpx::get_error_env(), hpx::get_error_what(), hpx::get_error_config(), hpx::get_error_state()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, boost::exception, or boost::exception_ptr.


Function get_error_locality_id

hpx::get_error_locality_id — Return the locality id where the exception was thrown.

Synopsis

// In header: <hpx/exception.hpp>


boost::uint32_t get_error_locality_id(hpx::exception const & e);

Description

The function hpx::get_error_locality_id can be used to extract the diagnostic information element representing the locality id as stored in the given exception instance.

See Also:

hpx::diagnostic_information(), hpx::get_error_host_name(), hpx::get_error_process_id(), hpx::get_error_function_name(), hpx::get_error_file_name(), hpx::get_error_line_number(), hpx::get_error_os_thread(), hpx::get_error_thread_id(), hpx::get_error_thread_description(), hpx::get_error(), hpx::get_error_backtrace(), hpx::get_error_env(), hpx::get_error_what(), hpx::get_error_config(), hpx::get_error_state()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, boost::exception, or boost::exception_ptr.

Returns:

The locality id of the locality where the exception was thrown. If the exception instance does not hold this information, the function will return hpx::naming::invalid_locality_id.

Throws:

nothing

Function get_error_locality_id

hpx::get_error_locality_id — Return the locality id where the exception was thrown.

Synopsis

// In header: <hpx/exception.hpp>


boost::uint32_t get_error_locality_id(hpx::error_code const & e);

Description

The function hpx::get_error_locality_id can be used to extract the diagnostic information element representing the locality id as stored in the given exception instance.

See Also:

hpx::diagnostic_information(), hpx::get_error_host_name(), hpx::get_error_process_id(), hpx::get_error_function_name(), hpx::get_error_file_name(), hpx::get_error_line_number(), hpx::get_error_os_thread(), hpx::get_error_thread_id(), hpx::get_error_thread_description(), hpx::get_error(), hpx::get_error_backtrace(), hpx::get_error_env(), hpx::get_error_what(), hpx::get_error_config(), hpx::get_error_state()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, boost::exception, or boost::exception_ptr.


Function get_error

hpx::get_error — Return the error value code of the exception that was thrown.

Synopsis

// In header: <hpx/exception.hpp>


error get_error(hpx::exception const & e);

Description

The function hpx::get_error can be used to extract the diagnostic information element representing the error value code as stored in the given exception instance.

See Also:

hpx::diagnostic_information(), hpx::get_error_host_name(), hpx::get_error_process_id(), hpx::get_error_function_name(), hpx::get_error_file_name(), hpx::get_error_line_number(), hpx::get_error_os_thread(), hpx::get_error_thread_id(), hpx::get_error_thread_description(), hpx::get_error_backtrace(), hpx::get_error_env(), hpx::get_error_what(), hpx::get_error_config(), hpx::get_error_state()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, or boost::exception_ptr.

Returns:

The error value code stored in the given exception instance.

Throws:

nothing

Function get_error

hpx::get_error — Return the error value code of the exception that was thrown.

Synopsis

// In header: <hpx/exception.hpp>


error get_error(hpx::error_code const & e);

Description

The function hpx::get_error can be used to extract the diagnostic information element representing the error value code as stored in the given exception instance.

See Also:

hpx::diagnostic_information(), hpx::get_error_host_name(), hpx::get_error_process_id(), hpx::get_error_function_name(), hpx::get_error_file_name(), hpx::get_error_line_number(), hpx::get_error_os_thread(), hpx::get_error_thread_id(), hpx::get_error_thread_description(), hpx::get_error_backtrace(), hpx::get_error_env(), hpx::get_error_what(), hpx::get_error_config(), hpx::get_error_state()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, or boost::exception_ptr.


Function get_error_host_name

hpx::get_error_host_name — Return the hostname of the locality where the exception was thrown.

Synopsis

// In header: <hpx/exception.hpp>


std::string get_error_host_name(hpx::exception const & e);

Description

The function hpx::get_error_host_name can be used to extract the diagnostic information element representing the host name as stored in the given exception instance.

See Also:

hpx::diagnostic_information(), hpx::get_error_process_id(), hpx::get_error_function_name(), hpx::get_error_file_name(), hpx::get_error_line_number(), hpx::get_error_os_thread(), hpx::get_error_thread_id(), hpx::get_error_thread_description(), hpx::get_error(), hpx::get_error_backtrace(), hpx::get_error_env(), hpx::get_error_what(), hpx::get_error_config(), hpx::get_error_state()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, boost::exception, or boost::exception_ptr.

Returns:

The hostname of the locality where the exception was thrown. If the exception instance does not hold this information, the function will return an empty string.

Throws:

std::bad_alloc (if one of the required allocations fails)

Function get_error_host_name

hpx::get_error_host_name — Return the hostname of the locality where the exception was thrown.

Synopsis

// In header: <hpx/exception.hpp>


std::string get_error_host_name(hpx::error_code const & e);

Description

The function hpx::get_error_host_name can be used to extract the diagnostic information element representing the host name as stored in the given exception instance.

See Also:

hpx::diagnostic_information(), hpx::get_error_process_id(), hpx::get_error_function_name(), hpx::get_error_file_name(), hpx::get_error_line_number(), hpx::get_error_os_thread(), hpx::get_error_thread_id(), hpx::get_error_thread_description(), hpx::get_error(), hpx::get_error_backtrace(), hpx::get_error_env(), hpx::get_error_what(), hpx::get_error_config(), hpx::get_error_state()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, boost::exception, or boost::exception_ptr.


Function get_error_process_id

hpx::get_error_process_id — Return the (operating system) process id of the locality where the exception was thrown.

Synopsis

// In header: <hpx/exception.hpp>


boost::int64_t get_error_process_id(hpx::exception const & e);

Description

The function hpx::get_error_process_id can be used to extract the diagnostic information element representing the process id as stored in the given exception instance.

See Also:

hpx::diagnostic_information(), hpx::get_error_host_name(), hpx::get_error_function_name(), hpx::get_error_file_name(), hpx::get_error_line_number(), hpx::get_error_os_thread(), hpx::get_error_thread_id(), hpx::get_error_thread_description(), hpx::get_error(), hpx::get_error_backtrace(), hpx::get_error_env(), hpx::get_error_what(), hpx::get_error_config(), hpx::get_error_state()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, boost::exception, or boost::exception_ptr.

Returns:

The process id of the OS-process which threw the exception. If the exception instance does not hold this information, the function will return 0.

Throws:

nothing

Function get_error_process_id

hpx::get_error_process_id — Return the (operating system) process id of the locality where the exception was thrown.

Synopsis

// In header: <hpx/exception.hpp>


boost::int64_t get_error_process_id(hpx::error_code const & e);

Description

The function hpx::get_error_process_id can be used to extract the diagnostic information element representing the process id as stored in the given exception instance.

See Also:

hpx::diagnostic_information(), hpx::get_error_host_name(), hpx::get_error_function_name(), hpx::get_error_file_name(), hpx::get_error_line_number(), hpx::get_error_os_thread(), hpx::get_error_thread_id(), hpx::get_error_thread_description(), hpx::get_error(), hpx::get_error_backtrace(), hpx::get_error_env(), hpx::get_error_what(), hpx::get_error_config(), hpx::get_error_state()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, boost::exception, or boost::exception_ptr.


Function get_error_env

hpx::get_error_env — Return the environment of the OS-process at the point the exception was thrown.

Synopsis

// In header: <hpx/exception.hpp>


std::string get_error_env(hpx::exception const & e);

Description

The function hpx::get_error_env can be used to extract the diagnostic information element representing the environment of the OS-process collected at the point the exception was thrown.

See Also:

hpx::diagnostic_information(), hpx::get_error_host_name(), hpx::get_error_process_id(), hpx::get_error_function_name(), hpx::get_error_file_name(), hpx::get_error_line_number(), hpx::get_error_os_thread(), hpx::get_error_thread_id(), hpx::get_error_thread_description(), hpx::get_error(), hpx::get_error_backtrace(), hpx::get_error_what(), hpx::get_error_config(), hpx::get_error_state()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, boost::exception, or boost::exception_ptr.

Returns:

The environment from the point the exception was thrown. If the exception instance does not hold this information, the function will return an empty string.

Throws:

std::bad_alloc (if one of the required allocations fails)

Function get_error_env

hpx::get_error_env — Return the environment of the OS-process at the point the exception was thrown.

Synopsis

// In header: <hpx/exception.hpp>


std::string get_error_env(hpx::error_code const & e);

Description

The function hpx::get_error_env can be used to extract the diagnostic information element representing the environment of the OS-process collected at the point the exception was thrown.

See Also:

hpx::diagnostic_information(), hpx::get_error_host_name(), hpx::get_error_process_id(), hpx::get_error_function_name(), hpx::get_error_file_name(), hpx::get_error_line_number(), hpx::get_error_os_thread(), hpx::get_error_thread_id(), hpx::get_error_thread_description(), hpx::get_error(), hpx::get_error_backtrace(), hpx::get_error_what(), hpx::get_error_config(), hpx::get_error_state()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, boost::exception, or boost::exception_ptr.


Function get_error_function_name

hpx::get_error_function_name — Return the function name from which the exception was thrown.

Synopsis

// In header: <hpx/exception.hpp>


std::string get_error_function_name(hpx::exception const & e);

Description

The function hpx::get_error_function_name can be used to extract the diagnostic information element representing the name of the function as stored in the given exception instance.

See Also:

hpx::diagnostic_information(), hpx::get_error_host_name(), hpx::get_error_process_id(), hpx::get_error_file_name(), hpx::get_error_line_number(), hpx::get_error_os_thread(), hpx::get_error_thread_id(), hpx::get_error_thread_description(), hpx::get_error(), hpx::get_error_backtrace(), hpx::get_error_env(), hpx::get_error_what(), hpx::get_error_config(), hpx::get_error_state()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, boost::exception, or boost::exception_ptr.

Returns:

The name of the function from which the exception was thrown. If the exception instance does not hold this information, the function will return an empty string.

Throws:

std::bad_alloc (if one of the required allocations fails)

Function get_error_function_name

hpx::get_error_function_name — Return the function name from which the exception was thrown.

Synopsis

// In header: <hpx/exception.hpp>


std::string get_error_function_name(hpx::error_code const & e);

Description

The function hpx::get_error_function_name can be used to extract the diagnostic information element representing the name of the function as stored in the given exception instance.

See Also:

hpx::diagnostic_information(), hpx::get_error_host_name(), hpx::get_error_process_id(), hpx::get_error_file_name(), hpx::get_error_line_number(), hpx::get_error_os_thread(), hpx::get_error_thread_id(), hpx::get_error_thread_description(), hpx::get_error(), hpx::get_error_backtrace(), hpx::get_error_env(), hpx::get_error_what(), hpx::get_error_config(), hpx::get_error_state()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, boost::exception, or boost::exception_ptr.


Function get_error_backtrace

hpx::get_error_backtrace — Return the stack backtrace from the point the exception was thrown.

Synopsis

// In header: <hpx/exception.hpp>


std::string get_error_backtrace(hpx::exception const & e);

Description

The function hpx::get_error_backtrace can be used to extract the diagnostic information element representing the stack backtrace collected at the point the exception was thrown.

See Also:

hpx::diagnostic_information(), hpx::get_error_host_name(), hpx::get_error_process_id(), hpx::get_error_function_name(), hpx::get_error_file_name(), hpx::get_error_line_number(), hpx::get_error_os_thread(), hpx::get_error_thread_id(), hpx::get_error_thread_description(), hpx::get_error(), hpx::get_error_env(), hpx::get_error_what(), hpx::get_error_config(), hpx::get_error_state()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, boost::exception, or boost::exception_ptr.

Returns:

The stack back trace from the point the exception was thrown. If the exception instance does not hold this information, the function will return an empty string.

Throws:

std::bad_alloc (if one of the required allocations fails)

Function get_error_backtrace

hpx::get_error_backtrace — Return the stack backtrace from the point the exception was thrown.

Synopsis

// In header: <hpx/exception.hpp>


std::string get_error_backtrace(hpx::error_code const & e);

Description

The function hpx::get_error_backtrace can be used to extract the diagnostic information element representing the stack backtrace collected at the point the exception was thrown.

See Also:

hpx::diagnostic_information(), hpx::get_error_host_name(), hpx::get_error_process_id(), hpx::get_error_function_name(), hpx::get_error_file_name(), hpx::get_error_line_number(), hpx::get_error_os_thread(), hpx::get_error_thread_id(), hpx::get_error_thread_description(), hpx::get_error(), hpx::get_error_env(), hpx::get_error_what(), hpx::get_error_config(), hpx::get_error_state()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, boost::exception, or boost::exception_ptr.


Function get_error_file_name

hpx::get_error_file_name — Return the (source code) file name of the function from which the exception was thrown.

Synopsis

// In header: <hpx/exception.hpp>


std::string get_error_file_name(hpx::exception const & e);

Description

The function hpx::get_error_file_name can be used to extract the diagnostic information element representing the name of the source file as stored in the given exception instance.

See Also:

hpx::diagnostic_information(), hpx::get_error_host_name(), hpx::get_error_process_id(), hpx::get_error_function_name(), hpx::get_error_line_number(), hpx::get_error_os_thread(), hpx::get_error_thread_id(), hpx::get_error_thread_description(), hpx::get_error(), hpx::get_error_backtrace(), hpx::get_error_env(), hpx::get_error_what(), hpx::get_error_config(), hpx::get_error_state()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, boost::exception, or boost::exception_ptr.

Returns:

The name of the source file of the function from which the exception was thrown. If the exception instance does not hold this information, the function will return an empty string.

Throws:

std::bad_alloc (if one of the required allocations fails)

Function get_error_file_name

hpx::get_error_file_name — Return the (source code) file name of the function from which the exception was thrown.

Synopsis

// In header: <hpx/exception.hpp>


std::string get_error_file_name(hpx::error_code const & e);

Description

The function hpx::get_error_file_name can be used to extract the diagnostic information element representing the name of the source file as stored in the given exception instance.

See Also:

hpx::diagnostic_information(), hpx::get_error_host_name(), hpx::get_error_process_id(), hpx::get_error_function_name(), hpx::get_error_line_number(), hpx::get_error_os_thread(), hpx::get_error_thread_id(), hpx::get_error_thread_description(), hpx::get_error(), hpx::get_error_backtrace(), hpx::get_error_env(), hpx::get_error_what(), hpx::get_error_config(), hpx::get_error_state()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, boost::exception, or boost::exception_ptr.


Function get_error_line_number

hpx::get_error_line_number — Return the line number in the (source code) file of the function from which the exception was thrown.

Synopsis

// In header: <hpx/exception.hpp>


int get_error_line_number(hpx::exception const & e);

Description

The function hpx::get_error_line_number can be used to extract the diagnostic information element representing the line number as stored in the given exception instance.

See Also:

hpx::diagnostic_information(), hpx::get_error_host_name(), hpx::get_error_process_id(), hpx::get_error_function_name(), hpx::get_error_file_name(), hpx::get_error_os_thread(), hpx::get_error_thread_id(), hpx::get_error_thread_description(), hpx::get_error(), hpx::get_error_backtrace(), hpx::get_error_env(), hpx::get_error_what(), hpx::get_error_config(), hpx::get_error_state()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, boost::exception, or boost::exception_ptr.

Returns:

The line number of the place where the exception was thrown. If the exception instance does not hold this information, the function will return -1.

Throws:

nothing

Function get_error_line_number

hpx::get_error_line_number — Return the line number in the (source code) file of the function from which the exception was thrown.

Synopsis

// In header: <hpx/exception.hpp>


int get_error_line_number(hpx::error_code const & e);

Description

The function hpx::get_error_line_number can be used to extract the diagnostic information element representing the line number as stored in the given exception instance.

See Also:

hpx::diagnostic_information(), hpx::get_error_host_name(), hpx::get_error_process_id(), hpx::get_error_function_name(), hpx::get_error_file_name(), hpx::get_error_os_thread(), hpx::get_error_thread_id(), hpx::get_error_thread_description(), hpx::get_error(), hpx::get_error_backtrace(), hpx::get_error_env(), hpx::get_error_what(), hpx::get_error_config(), hpx::get_error_state()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, boost::exception, or boost::exception_ptr.


Function get_error_os_thread

hpx::get_error_os_thread — Return the sequence number of the OS-thread used to execute HPX-threads from which the exception was thrown.

Synopsis

// In header: <hpx/exception.hpp>


std::size_t get_error_os_thread(hpx::exception const & e);

Description

The function hpx::get_error_os_thread can be used to extract the diagnostic information element representing the sequence number of the OS-thread as stored in the given exception instance.

See Also:

hpx::diagnostic_information(), hpx::get_error_host_name(), hpx::get_error_process_id(), hpx::get_error_function_name(), hpx::get_error_file_name(), hpx::get_error_line_number(), hpx::get_error_thread_id(), hpx::get_error_thread_description(), hpx::get_error(), hpx::get_error_backtrace(), hpx::get_error_env(), hpx::get_error_what(), hpx::get_error_config(), hpx::get_error_state()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, boost::exception, or boost::exception_ptr.

Returns:

The sequence number of the OS-thread used to execute the HPX-thread from which the exception was thrown. If the exception instance does not hold this information, the function will return std::size_t(-1).

Throws:

nothing

Function get_error_os_thread

hpx::get_error_os_thread — Return the sequence number of the OS-thread used to execute HPX-threads from which the exception was thrown.

Synopsis

// In header: <hpx/exception.hpp>


std::size_t get_error_os_thread(hpx::error_code const & e);

Description

The function hpx::get_error_os_thread can be used to extract the diagnostic information element representing the sequence number of the OS-thread as stored in the given exception instance.

See Also:

hpx::diagnostic_information(), hpx::get_error_host_name(), hpx::get_error_process_id(), hpx::get_error_function_name(), hpx::get_error_file_name(), hpx::get_error_line_number(), hpx::get_error_thread_id(), hpx::get_error_thread_description(), hpx::get_error(), hpx::get_error_backtrace(), hpx::get_error_env(), hpx::get_error_what(), hpx::get_error_config(), hpx::get_error_state()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, boost::exception, or boost::exception_ptr.


Function get_error_thread_id

hpx::get_error_thread_id — Return the unique thread id of the HPX-thread from which the exception was thrown.

Synopsis

// In header: <hpx/exception.hpp>


std::size_t get_error_thread_id(hpx::exception const & e);

Description

The function hpx::get_error_thread_id can be used to extract the diagnostic information element representing the HPX-thread id as stored in the given exception instance.

See Also:

hpx::diagnostic_information(), hpx::get_error_host_name(), hpx::get_error_process_id(), hpx::get_error_function_name(), hpx::get_error_file_name(), hpx::get_error_line_number(), hpx::get_error_os_thread(), hpx::get_error_thread_description(), hpx::get_error(), hpx::get_error_backtrace(), hpx::get_error_env(), hpx::get_error_what(), hpx::get_error_config(), hpx::get_error_state()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, boost::exception, or boost::exception_ptr.

Returns:

The unique thread id of the HPX-thread from which the exception was thrown. If the exception instance does not hold this information, the function will return std::size_t(0).

Throws:

nothing

Function get_error_thread_id

hpx::get_error_thread_id — Return the unique thread id of the HPX-thread from which the exception was thrown.

Synopsis

// In header: <hpx/exception.hpp>


std::size_t get_error_thread_id(hpx::error_code const & e);

Description

The function hpx::get_error_thread_id can be used to extract the diagnostic information element representing the HPX-thread id as stored in the given exception instance.

See Also:

hpx::diagnostic_information(), hpx::get_error_host_name(), hpx::get_error_process_id(), hpx::get_error_function_name(), hpx::get_error_file_name(), hpx::get_error_line_number(), hpx::get_error_os_thread(), hpx::get_error_thread_description(), hpx::get_error(), hpx::get_error_backtrace(), hpx::get_error_env(), hpx::get_error_what(), hpx::get_error_config(), hpx::get_error_state()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, boost::exception, or boost::exception_ptr.


Function get_error_thread_description

hpx::get_error_thread_description — Return any additionally available thread description of the HPX-thread from which the exception was thrown.

Synopsis

// In header: <hpx/exception.hpp>


std::string get_error_thread_description(hpx::exception const & e);

Description

The function hpx::get_error_thread_description can be used to extract the diagnostic information element representing the additional thread description as stored in the given exception instance.

See Also:

hpx::diagnostic_information(), hpx::get_error_host_name(), hpx::get_error_process_id(), hpx::get_error_function_name(), hpx::get_error_file_name(), hpx::get_error_line_number(), hpx::get_error_os_thread(), hpx::get_error_thread_id(), hpx::get_error_backtrace(), hpx::get_error_env(), hpx::get_error(), hpx::get_error_state(), hpx::get_error_what(), hpx::get_error_config()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, boost::exception, or boost::exception_ptr.

Returns:

Any additionally available thread description of the HPX-thread from which the exception was thrown. If the exception instance does not hold this information, the function will return an empty string.

Throws:

std::bad_alloc (if one of the required allocations fails)

Function get_error_thread_description

hpx::get_error_thread_description — Return any additionally available thread description of the HPX-thread from which the exception was thrown.

Synopsis

// In header: <hpx/exception.hpp>


std::string get_error_thread_description(hpx::error_code const & e);

Description

The function hpx::get_error_thread_description can be used to extract the diagnostic information element representing the additional thread description as stored in the given exception instance.

See Also:

hpx::diagnostic_information(), hpx::get_error_host_name(), hpx::get_error_process_id(), hpx::get_error_function_name(), hpx::get_error_file_name(), hpx::get_error_line_number(), hpx::get_error_os_thread(), hpx::get_error_thread_id(), hpx::get_error_backtrace(), hpx::get_error_env(), hpx::get_error(), hpx::get_error_state(), hpx::get_error_what(), hpx::get_error_config()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, boost::exception, or boost::exception_ptr.


Function get_error_config

hpx::get_error_config — Return the HPX configuration information from the point at which the exception was thrown.

Synopsis

// In header: <hpx/exception.hpp>


std::string get_error_config(hpx::exception const & e);

Description

The function hpx::get_error_config can be used to extract the diagnostic information element representing the full HPX configuration as stored in the given exception instance.

See Also:

hpx::diagnostic_information(), hpx::get_error_host_name(), hpx::get_error_process_id(), hpx::get_error_function_name(), hpx::get_error_file_name(), hpx::get_error_line_number(), hpx::get_error_os_thread(), hpx::get_error_thread_id(), hpx::get_error_backtrace(), hpx::get_error_env(), hpx::get_error(), hpx::get_error_state(), hpx::get_error_what(), hpx::get_error_thread_description()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, boost::exception, or boost::exception_ptr.

Returns:

The HPX configuration information from the point at which the exception was thrown. If the exception instance does not hold this information, the function will return an empty string.

Throws:

std::bad_alloc (if one of the required allocations fails)

Function get_error_config

hpx::get_error_config — Return the HPX configuration information from the point at which the exception was thrown.

Synopsis

// In header: <hpx/exception.hpp>


std::string get_error_config(hpx::error_code const & e);

Description

The function hpx::get_error_config can be used to extract the diagnostic information element representing the full HPX configuration as stored in the given exception instance.

See Also:

hpx::diagnostic_information(), hpx::get_error_host_name(), hpx::get_error_process_id(), hpx::get_error_function_name(), hpx::get_error_file_name(), hpx::get_error_line_number(), hpx::get_error_os_thread(), hpx::get_error_thread_id(), hpx::get_error_backtrace(), hpx::get_error_env(), hpx::get_error(), hpx::get_error_state(), hpx::get_error_what(), hpx::get_error_thread_description()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, boost::exception, or boost::exception_ptr.


Function get_error_state

hpx::get_error_state — Return the HPX runtime state at the point at which the exception was thrown.

Synopsis

// In header: <hpx/exception.hpp>


std::string get_error_state(hpx::exception const & e);

Description

The function hpx::get_error_state can be used to extract the diagnostic information element representing the state the runtime system was in, as stored in the given exception instance.

See Also:

hpx::diagnostic_information(), hpx::get_error_host_name(), hpx::get_error_process_id(), hpx::get_error_function_name(), hpx::get_error_file_name(), hpx::get_error_line_number(), hpx::get_error_os_thread(), hpx::get_error_thread_id(), hpx::get_error_backtrace(), hpx::get_error_env(), hpx::get_error(), hpx::get_error_what(), hpx::get_error_thread_description()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, boost::exception, or boost::exception_ptr.

Returns:

The runtime state at the point at which the exception was thrown. If the exception instance does not hold this information, the function will return an empty string.

Throws:

std::bad_alloc (if one of the required allocations fails)

Function get_error_state

hpx::get_error_state — Return the HPX runtime state at the point at which the exception was thrown.

Synopsis

// In header: <hpx/exception.hpp>


std::string get_error_state(hpx::error_code const & e);

Description

The function hpx::get_error_state can be used to extract the diagnostic information element representing the state the runtime system was in, as stored in the given exception instance.

See Also:

hpx::diagnostic_information(), hpx::get_error_host_name(), hpx::get_error_process_id(), hpx::get_error_function_name(), hpx::get_error_file_name(), hpx::get_error_line_number(), hpx::get_error_os_thread(), hpx::get_error_thread_id(), hpx::get_error_backtrace(), hpx::get_error_env(), hpx::get_error(), hpx::get_error_what(), hpx::get_error_thread_description()

Parameters:

e

The parameter e will be inspected for the requested diagnostic information elements which have been stored at the point where the exception was thrown. This parameter can be one of the following types: hpx::exception, hpx::error_code, boost::exception, or boost::exception_ptr.


Macro HPX_THROW_EXCEPTION

HPX_THROW_EXCEPTION — Throw a hpx::exception initialized from the given parameters.

Synopsis

// In header: <hpx/exception.hpp>

HPX_THROW_EXCEPTION(errcode, f, msg)

Description

The macro HPX_THROW_EXCEPTION can be used to throw a hpx::exception. The purpose of this macro is to prepend the source file name and line number of the position where the exception is thrown to the error message. Moreover, this associates additional diagnostic information with the exception, such as file name and line number, locality id and thread id, and stack backtrace from the point where the exception was thrown.

The parameter errcode holds the hpx::error code the new exception should encapsulate. The parameter f is expected to hold the name of the function the exception is thrown from, and the parameter msg holds the error message the new exception should encapsulate.

Example: 

void raise_exception()
{
    // Throw a hpx::exception initialized from the given parameters.
    // Additionally associate with this exception some detailed
    // diagnostic information about the throw-site.
    HPX_THROW_EXCEPTION(hpx::no_success, "raise_exception", "simulated error");
}


Macro HPX_THROWS_IF

HPX_THROWS_IF — Either throw a hpx::exception or initialize hpx::error_code from the given parameters.

Synopsis

// In header: <hpx/exception.hpp>

HPX_THROWS_IF(ec, errcode, f, msg)

Description

The macro HPX_THROWS_IF can be used to either throw a hpx::exception or to initialize a hpx::error_code from the given parameters. If &ec == &hpx::throws, the semantics of this macro are equivalent to HPX_THROW_EXCEPTION. If &ec != &hpx::throws, the hpx::error_code instance ec is initialized instead.

The parameter errcode holds the hpx::error code from which the new exception should be initialized. The parameter f is expected to hold the name of the function the exception is thrown from, and the parameter msg holds the error message the new exception should encapsulate.
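As a sketch, a function following this dual-mode convention might look like the following (checked_operation is illustrative, not part of HPX):

```cpp
#include <hpx/exception.hpp>

// Illustrative function following the HPX error-handling convention:
// it throws if the caller passed hpx::throws (the default), and
// initializes 'ec' otherwise, letting the caller choose the style.
void checked_operation(int arg, hpx::error_code& ec = hpx::throws)
{
    if (arg < 0) {
        // Throws hpx::exception when &ec == &hpx::throws; otherwise
        // initializes ec from the given parameters and falls through.
        HPX_THROWS_IF(ec, hpx::bad_parameter,
            "checked_operation", "argument must be non-negative");
        return;
    }
    // ... perform the actual work ...
}
```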

Global throws

hpx::throws — Predefined error_code object used as "throw on error" tag.

Synopsis

// In header: <hpx/exception_fwd.hpp>

error_code throws;

Description

The predefined hpx::error_code object hpx::throws is supplied for use as a "throw on error" tag.

Functions that specify an argument in the form 'error_code& ec=throws' (with appropriate namespace qualifiers) have the following error handling semantics:

If &ec != &throws and an error occurred: ec.value() returns the implementation specific error number for the particular error that occurred and ec.category() returns the error_category for ec.value().

If &ec != &throws and an error did not occur, ec.clear().

If an error occurs and &ec == &throws, the function throws an exception of type hpx::exception or of a type derived from it. The exception's get_error_code() member function returns a reference to an hpx::error_code object with the behavior as specified above.
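The tag mechanism itself is just an address comparison against one distinguished object. The following self-contained sketch mimics the documented semantics with a minimal stand-in for error_code (simplified; not the actual HPX implementation):

```cpp
#include <stdexcept>

// Minimal stand-in for hpx::error_code: holds a value, can be cleared.
struct error_code
{
    int value = 0;
    void clear() { value = 0; }
};

// Distinguished "throw on error" tag object, analogous to hpx::throws.
error_code throws;

// A function following the documented convention.
void might_fail(bool fail, error_code& ec = throws)
{
    if (fail) {
        if (&ec == &throws)                       // throw-on-error path
            throw std::runtime_error("operation failed");
        ec.value = 42;                            // report through ec
        return;
    }
    if (&ec != &throws)
        ec.clear();                               // success: reset ec
}
```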

namespace hpx {
  class exception_list;
}

Class exception_list

hpx::exception_list

Synopsis

// In header: <hpx/exception_list.hpp>


class exception_list : public hpx::exception {
public:
  // types
  typedef exception_list_type::const_iterator iterator;  // forward iterator 

  // public member functions
  std::size_t size() const;
  exception_list_type::const_iterator begin() const;
  exception_list_type::const_iterator end() const;
};

Description

The class exception_list is a container of exception_ptr objects parallel algorithms may use to communicate uncaught exceptions encountered during parallel execution to the caller of the algorithm.

The type exception_list::const_iterator fulfills the requirements of a forward iterator.

exception_list public member functions

  1. std::size_t size() const;

    The number of exception_ptr objects contained within the exception_list.

    [Note]Note

    Complexity: Constant time.

  2. exception_list_type::const_iterator begin() const;

    An iterator referring to the first exception_ptr object contained within the exception_list.

  3. exception_list_type::const_iterator end() const;
    An iterator which is the past-the-end value for the exception_list.
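The following sketch shows how an exception_list typically reaches user code (assuming an HPX build; the parallel algorithm and execution policy names follow the hpx::parallel interface of this release):

```cpp
#include <hpx/include/parallel_for_each.hpp>
#include <hpx/exception_list.hpp>

#include <iostream>
#include <stdexcept>
#include <vector>

void process(std::vector<int>& data)
{
    try {
        // Tasks run in parallel; exceptions thrown by individual tasks
        // are collected and re-thrown as a single hpx::exception_list.
        hpx::parallel::for_each(hpx::parallel::par,
            data.begin(), data.end(),
            [](int i) {
                if (i < 0)
                    throw std::runtime_error("negative input");
            });
    }
    catch (hpx::exception_list const& el) {
        std::cerr << el.size() << " task(s) failed:\n";
        for (auto it = el.begin(); it != el.end(); ++it) {
            try {
                std::rethrow_exception(*it);   // inspect each exception_ptr
            }
            catch (std::exception const& e) {
                std::cerr << "  " << e.what() << '\n';
            }
        }
    }
}
```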
namespace hpx {
  int finalize(double, double = -1.0, error_code & = throws);
  int finalize(error_code & = throws);
  HPX_ATTRIBUTE_NORETURN void terminate();
  int disconnect(double, double = -1.0, error_code & = throws);
  int disconnect(error_code & = throws);
  int stop(error_code & = throws);
}

Function finalize

hpx::finalize — Main function to gracefully terminate the HPX runtime system.

Synopsis

// In header: <hpx/hpx_finalize.hpp>


int finalize(double shutdown_timeout, double localwait = -1.0, 
             error_code & ec = throws);

Description

The function hpx::finalize is the main way to (gracefully) exit any HPX application. It should be called from one locality only (usually the console locality) and it will notify all connected localities to finish execution. Only after all other localities have exited will this function return, allowing the console locality to exit as well.

During the execution of this function the runtime system will invoke all registered shutdown functions (see hpx::init) on all localities.

The default value (-1.0) for shutdown_timeout will try to find a globally set timeout value (configurable via the configuration parameter hpx.shutdown_timeout). If that is not set, or is -1.0 as well, any timeout is disabled: each connected locality will wait for all existing HPX-threads to terminate.

The default value (-1.0) for localwait will try to find a globally set wait time value (configurable via the configuration parameter hpx.finalize_wait_time). If that is not set, or is -1.0 as well, any additional local wait time before proceeding is disabled.

[Note]Note

As long as ec is not pre-initialized to hpx::throws this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

This function will block and wait for all connected localities to exit before returning to the caller. It should be the last HPX-function called by any application.

Using this function is an alternative to hpx::disconnect; only one of the two needs to be called.

Parameters:

ec

[in,out] this represents the error status on exit. If this is pre-initialized to hpx::throws, the function will throw on error instead.

localwait

This parameter allows specifying a local wait time (in microseconds) before the connected localities are notified and the overall shutdown process starts.

shutdown_timeout

This parameter allows specifying a timeout (in microseconds) determining how long any of the connected localities should wait for pending tasks to be executed. After this timeout, all suspended HPX-threads will be aborted. Note that this function will not abort any running HPX-threads. In any case, the shutdown will not proceed as long as there is at least one pending/running HPX-thread.

Returns:

This function will always return zero.
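A minimal usage sketch (assuming the conventional hpx::init/hpx_main entry points):

```cpp
#include <hpx/hpx_init.hpp>

int hpx_main(int argc, char* argv[])
{
    // ... application work runs on the HPX runtime here ...

    // Gracefully shut down: notify all connected localities and block
    // until they have exited. Should be the last HPX function called.
    return hpx::finalize();
}

int main(int argc, char* argv[])
{
    // Initialize the HPX runtime and invoke hpx_main on it.
    return hpx::init(argc, argv);
}
```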


Function finalize

hpx::finalize — Main function to gracefully terminate the HPX runtime system.

Synopsis

// In header: <hpx/hpx_finalize.hpp>


int finalize(error_code & ec = throws);

Description

The function hpx::finalize is the main way to (gracefully) exit any HPX application. It should be called from one locality only (usually the console locality) and it will notify all connected localities to finish execution. Only after all other localities have exited will this function return, allowing the console locality to exit as well.

During the execution of this function the runtime system will invoke all registered shutdown functions (see hpx::init) on all localities.

[Note]Note

As long as ec is not pre-initialized to hpx::throws this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

This function will block and wait for all connected localities to exit before returning to the caller. It should be the last HPX-function called by any application.

Using this function is an alternative to hpx::disconnect; only one of the two needs to be called.

Parameters:

ec

[in,out] this represents the error status on exit. If this is pre-initialized to hpx::throws, the function will throw on error instead.

Returns:

This function will always return zero.


Function terminate

hpx::terminate — Terminate any application non-gracefully.

Synopsis

// In header: <hpx/hpx_finalize.hpp>


HPX_ATTRIBUTE_NORETURN void terminate();

Description

The function hpx::terminate is the non-graceful way to exit any application immediately. It can be called from any locality and will terminate all localities currently used by the application.

[Note]Note

This function will cause HPX to call std::terminate() on all localities associated with this application. If the function is not called from an HPX-thread it will fail.


Function disconnect

hpx::disconnect — Disconnect this locality from the application.

Synopsis

// In header: <hpx/hpx_finalize.hpp>


int disconnect(double shutdown_timeout, double localwait = -1.0, 
               error_code & ec = throws);

Description

The function hpx::disconnect can be used to disconnect a locality from a running HPX application.

During the execution of this function the runtime system will invoke all registered shutdown functions (see hpx::init) on this locality. The default value (-1.0) for shutdown_timeout will try to find a globally set timeout value (which can be set as the configuration parameter "hpx.shutdown_timeout"); if that is not set, or is -1.0 as well, any timeout is disabled and each connected locality will wait for all existing HPX-threads to terminate.

The default value (-1.0) for localwait will try to find a globally set wait time value (which can be set as the configuration parameter hpx.finalize_wait_time); if this is not set, or is -1.0 as well, no additional local wait time is applied before proceeding.

[Note]Note

As long as ec is not pre-initialized to hpx::throws this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

This function will block and wait for this locality to finish executing before returning to the caller. It should be the last HPX-function called by any locality being disconnected.

Parameters:

ec

[in,out] This represents the error status on exit. If this is pre-initialized to hpx::throws, the function will throw on error instead.

localwait

This parameter allows specifying a local wait time (in microseconds) to elapse before the connected localities are notified and the overall shutdown process starts.

shutdown_timeout

This parameter allows specifying a timeout (in microseconds) determining how long this locality should wait for pending tasks to be executed. After this timeout all suspended HPX-threads will be aborted. Note that this function will not abort any running HPX-threads. In any case, the shutdown will not proceed as long as there is at least one pending or running HPX-thread.

Returns:

This function will always return zero.


Function disconnect

hpx::disconnect — Disconnect this locality from the application.

Synopsis

// In header: <hpx/hpx_finalize.hpp>


int disconnect(error_code & ec = throws);

Description

The function hpx::disconnect can be used to disconnect a locality from a running HPX application.

During the execution of this function the runtime system will invoke all registered shutdown functions (see hpx::init) on this locality.

[Note]Note

As long as ec is not pre-initialized to hpx::throws this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

This function will block and wait for this locality to finish executing before returning to the caller. It should be the last HPX-function called by any locality being disconnected.

Parameters:

ec

[in,out] This represents the error status on exit. If this is pre-initialized to hpx::throws, the function will throw on error instead.

Returns:

This function will always return zero.
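As an illustrative sketch (assuming a locality that joined a running application in hpx::runtime_mode_connect), a disconnecting locality would call hpx::disconnect as its last HPX function:

```cpp
#include <hpx/hpx_init.hpp>
#include <hpx/hpx_finalize.hpp>

int hpx_main(int argc, char* argv[])
{
    // ... work performed while this locality is part of the application ...

    // Leave the running application; must be the last HPX call here.
    return hpx::disconnect();
}
```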


Function stop

hpx::stop — Stop the runtime system.

Synopsis

// In header: <hpx/hpx_finalize.hpp>


int stop(error_code & ec = throws);

Description

This function will block and wait for this locality to finish executing before returning to the caller. It should be the last HPX-function called on every locality. This function should be used only if the runtime system was started using hpx::start.

Returns:

The function returns the value returned from the user-supplied main HPX function (usually hpx_main).
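The pairing of hpx::start and hpx::stop can be sketched as follows (a minimal example assuming the standard hpx_main entry point):

```cpp
#include <hpx/hpx_start.hpp>
#include <hpx/hpx_finalize.hpp>

int hpx_main(int argc, char* argv[])
{
    // ... application work ...
    return hpx::finalize();
}

int main(int argc, char* argv[])
{
    hpx::start(argc, argv);   // returns immediately; runtime runs in background

    // ... the main OS-thread may perform unrelated work here ...

    return hpx::stop();       // blocks until the runtime system has stopped
}
```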

namespace hpx {
  typedef util::function_nonser< void()> startup_function_type;
  typedef util::function_nonser< void()> shutdown_function_type;
  naming::id_type find_root_locality(error_code & = throws);
  std::vector< naming::id_type > find_all_localities(error_code & = throws);
  std::vector< naming::id_type > 
  find_all_localities(components::component_type, error_code & = throws);
  std::vector< naming::id_type > find_remote_localities(error_code & = throws);
  std::vector< naming::id_type > 
  find_remote_localities(components::component_type, error_code & = throws);
  naming::id_type 
  find_locality(components::component_type, error_code & = throws);
  boost::uint32_t get_num_localities_sync(error_code & = throws);
  boost::uint32_t get_initial_num_localities();
  lcos::future< boost::uint32_t > get_num_localities();
  boost::uint32_t 
  get_num_localities_sync(components::component_type, error_code & = throws);
  lcos::future< boost::uint32_t > 
  get_num_localities(components::component_type);
  void register_pre_startup_function(startup_function_type const &);
  void register_startup_function(startup_function_type const &);
  void register_pre_shutdown_function(shutdown_function_type const &);
  void register_shutdown_function(shutdown_function_type const &);
  bool is_starting();
  bool is_running();
  bool is_stopped();
  bool is_stopped_or_shutting_down();
  std::string get_thread_name();
  std::size_t get_num_worker_threads();
  boost::uint64_t get_system_uptime();
  naming::id_type 
  get_colocation_id_sync(naming::id_type const &, error_code & = throws);
  lcos::future< naming::id_type > get_colocation_id(naming::id_type const &);
  void start_active_counters(error_code & = throws);
  void reset_active_counters(error_code & = throws);
  void stop_active_counters(error_code & = throws);
  void evaluate_active_counters(bool = false, char const * = 0, 
                                error_code & = throws);
  parcelset::policies::message_handler * 
  create_message_handler(char const *, char const *, parcelset::parcelport *, 
                         std::size_t, std::size_t, error_code & = throws);
  serialization::binary_filter * 
  create_binary_filter(char const *, bool, serialization::binary_filter * = 0, 
                       error_code & = throws);
}

Type definition startup_function_type

startup_function_type

Synopsis

// In header: <hpx/hpx_fwd.hpp>


typedef util::function_nonser< void()> startup_function_type;

Description

The type of a function which is registered to be executed as a startup or pre-startup function.


Type definition shutdown_function_type

shutdown_function_type

Synopsis

// In header: <hpx/hpx_fwd.hpp>


typedef util::function_nonser< void()> shutdown_function_type;

Description

The type of a function which is registered to be executed as a shutdown or pre-shutdown function.


Function find_root_locality

hpx::find_root_locality — Return the global id representing the root locality.

Synopsis

// In header: <hpx/hpx_fwd.hpp>


naming::id_type find_root_locality(error_code & ec = throws);

Description

The function find_root_locality() can be used to retrieve the global id usable to refer to the root locality. The root locality is the locality where the main AGAS service is hosted.

[Note]Note

Generally, the id of a locality can be used for instance to create new instances of components and to invoke plain actions (global functions).

[Note]Note

As long as ec is not pre-initialized to hpx::throws this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

This function will return meaningful results only if called from an HPX-thread. It will return hpx::naming::invalid_id otherwise.

See Also:

hpx::find_all_localities(), hpx::find_locality()

Parameters:

ec

[in,out] This represents the error status on exit. If this is pre-initialized to hpx::throws, the function will throw on error instead.

Returns:

The global id representing the root locality for this application.


Function find_all_localities

hpx::find_all_localities — Return the list of global ids representing all localities available to this application.

Synopsis

// In header: <hpx/hpx_fwd.hpp>


std::vector< naming::id_type > find_all_localities(error_code & ec = throws);

Description

The function find_all_localities() can be used to retrieve the global ids of all localities currently available to this application.

[Note]Note

Generally, the id of a locality can be used for instance to create new instances of components and to invoke plain actions (global functions).

[Note]Note

As long as ec is not pre-initialized to hpx::throws this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

This function will return meaningful results only if called from an HPX-thread. It will return an empty vector otherwise.

See Also:

hpx::find_here(), hpx::find_locality()

Parameters:

ec

[in,out] This represents the error status on exit. If this is pre-initialized to hpx::throws, the function will throw on error instead.

Returns:

The global ids representing the localities currently available to this application.
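A short sketch of enumerating localities from an HPX-thread (the header names follow common HPX usage and are assumptions here):

```cpp
#include <hpx/include/runtime.hpp>
#include <hpx/include/iostreams.hpp>
#include <vector>

void print_localities()
{
    // Must run on an HPX-thread, otherwise an empty vector is returned.
    std::vector<hpx::naming::id_type> localities = hpx::find_all_localities();
    for (hpx::naming::id_type const& id : localities)
        hpx::cout << "locality: " << id << "\n" << hpx::flush;
}
```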


Function find_all_localities

hpx::find_all_localities — Return the list of global ids representing all localities available to this application which support the given component type.

Synopsis

// In header: <hpx/hpx_fwd.hpp>


std::vector< naming::id_type > 
find_all_localities(components::component_type type, error_code & ec = throws);

Description

The function find_all_localities() can be used to retrieve the global ids of all localities currently available to this application which support the creation of instances of the given component type.

[Note]Note

Generally, the id of a locality can be used for instance to create new instances of components and to invoke plain actions (global functions).

[Note]Note

As long as ec is not pre-initialized to hpx::throws this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

This function will return meaningful results only if called from an HPX-thread. It will return an empty vector otherwise.

See Also:

hpx::find_here(), hpx::find_locality()

Parameters:

ec

[in,out] This represents the error status on exit. If this is pre-initialized to hpx::throws, the function will throw on error instead.

type

[in] The type of the components for which the function should return the available localities.

Returns:

The global ids representing the localities currently available to this application which support the creation of instances of the given component type. If no localities supporting the given component type are currently available, this function will return an empty vector.


Function find_remote_localities

hpx::find_remote_localities — Return the list of global ids representing all remote localities (all but the current locality) available to this application.

Synopsis

// In header: <hpx/hpx_fwd.hpp>


std::vector< naming::id_type > 
find_remote_localities(error_code & ec = throws);

Description

The function find_remote_localities() can be used to retrieve the global ids of all remote localities currently available to this application (i.e. all localities except the current one).

[Note]Note

Generally, the id of a locality can be used for instance to create new instances of components and to invoke plain actions (global functions).

[Note]Note

As long as ec is not pre-initialized to hpx::throws this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

This function will return meaningful results only if called from an HPX-thread. It will return an empty vector otherwise.

See Also:

hpx::find_here(), hpx::find_locality()

Parameters:

ec

[in,out] This represents the error status on exit. If this is pre-initialized to hpx::throws, the function will throw on error instead.

Returns:

The global ids representing the remote localities currently available to this application.


Function find_remote_localities

hpx::find_remote_localities — Return the list of locality ids of remote localities supporting the given component type. By default this function will return the list of all remote localities (all but the current locality).

Synopsis

// In header: <hpx/hpx_fwd.hpp>


std::vector< naming::id_type > 
find_remote_localities(components::component_type type, 
                       error_code & ec = throws);

Description

The function find_remote_localities() can be used to retrieve the global ids of all remote localities currently available to this application (i.e. all localities except the current one) which support the creation of instances of the given component type.

[Note]Note

Generally, the id of a locality can be used for instance to create new instances of components and to invoke plain actions (global functions).

[Note]Note

As long as ec is not pre-initialized to hpx::throws this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

This function will return meaningful results only if called from an HPX-thread. It will return an empty vector otherwise.

See Also:

hpx::find_here(), hpx::find_locality()

Parameters:

ec

[in,out] This represents the error status on exit. If this is pre-initialized to hpx::throws, the function will throw on error instead.

type

[in] The type of the components for which the function should return the available remote localities.

Returns:

The global ids representing the remote localities currently available to this application.


Function find_locality

hpx::find_locality — Return the global id representing an arbitrary locality which supports the given component type.

Synopsis

// In header: <hpx/hpx_fwd.hpp>


naming::id_type 
find_locality(components::component_type type, error_code & ec = throws);

Description

The function find_locality() can be used to retrieve the global id of an arbitrary locality currently available to this application which supports the creation of instances of the given component type.

[Note]Note

Generally, the id of a locality can be used for instance to create new instances of components and to invoke plain actions (global functions).

[Note]Note

As long as ec is not pre-initialized to hpx::throws this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

This function will return meaningful results only if called from an HPX-thread. It will return hpx::naming::invalid_id otherwise.

See Also:

hpx::find_here(), hpx::find_all_localities()

Parameters:

ec

[in,out] This represents the error status on exit. If this is pre-initialized to hpx::throws, the function will throw on error instead.

type

[in] The type of the components for which the function should return any available locality.

Returns:

The global id representing an arbitrary locality currently available to this application which supports the creation of instances of the given component type. If no locality supporting the given component type is currently available, this function will return hpx::naming::invalid_id.


Function get_num_localities_sync

hpx::get_num_localities_sync — Return the number of localities which are currently registered for the running application.

Synopsis

// In header: <hpx/hpx_fwd.hpp>


boost::uint32_t get_num_localities_sync(error_code & ec = throws);

Description

The function get_num_localities_sync returns the number of localities currently connected to the console.

[Note]Note

This function will return meaningful results only if called from an HPX-thread. It will return 0 otherwise.

As long as ec is not pre-initialized to hpx::throws this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

See Also:

hpx::find_all_localities(), hpx::get_num_localities()


Function get_initial_num_localities

hpx::get_initial_num_localities — Return the number of localities which were registered at startup for the running application.

Synopsis

// In header: <hpx/hpx_fwd.hpp>


boost::uint32_t get_initial_num_localities();

Description

The function get_initial_num_localities returns the number of localities which were connected to the console at application startup.


See Also:

hpx::find_all_localities, hpx::get_num_localities


Function get_num_localities

hpx::get_num_localities — Asynchronously return the number of localities which are currently registered for the running application.

Synopsis

// In header: <hpx/hpx_fwd.hpp>


lcos::future< boost::uint32_t > get_num_localities();

Description

The function get_num_localities asynchronously returns the number of localities currently connected to the console. The returned future represents the actual result.

[Note]Note

This function will return meaningful results only if called from an HPX-thread. It will return 0 otherwise.

See Also:

hpx::find_all_localities, hpx::get_num_localities
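The asynchronous variant allows the query to overlap with other work; a minimal sketch:

```cpp
#include <hpx/include/runtime.hpp>

void query_localities()
{
    // Issue the query without blocking.
    hpx::lcos::future<boost::uint32_t> f = hpx::get_num_localities();

    // ... do other useful work while the query is in flight ...

    boost::uint32_t n = f.get();   // blocks only if the result is not ready
    (void)n;
}
```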


Function get_num_localities_sync

hpx::get_num_localities_sync — Return the number of localities which are currently registered for the running application.

Synopsis

// In header: <hpx/hpx_fwd.hpp>


boost::uint32_t 
get_num_localities_sync(components::component_type t, 
                        error_code & ec = throws);

Description

The function get_num_localities_sync returns the number of localities currently connected to the console which support the creation of the given component type.

[Note]Note

This function will return meaningful results only if called from an HPX-thread. It will return 0 otherwise.

As long as ec is not pre-initialized to hpx::throws this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

See Also:

hpx::find_all_localities, hpx::get_num_localities

Parameters:

ec

[in,out] This represents the error status on exit. If this is pre-initialized to hpx::throws, the function will throw on error instead.

t

The component type for which the number of connected localities should be retrieved.


Function get_num_localities

hpx::get_num_localities — Asynchronously return the number of localities which are currently registered for the running application.

Synopsis

// In header: <hpx/hpx_fwd.hpp>


lcos::future< boost::uint32_t > 
get_num_localities(components::component_type t);

Description

The function get_num_localities asynchronously returns the number of localities currently connected to the console which support the creation of the given component type. The returned future represents the actual result.

[Note]Note

This function will return meaningful results only if called from an HPX-thread. It will return 0 otherwise.

See Also:

hpx::find_all_localities, hpx::get_num_localities

Parameters:

t

The component type for which the number of connected localities should be retrieved.


Function register_pre_startup_function

hpx::register_pre_startup_function — Add a function to be executed by an HPX thread before hpx_main but guaranteed before any startup function is executed (system-wide).

Synopsis

// In header: <hpx/hpx_fwd.hpp>


void register_pre_startup_function(startup_function_type const & f);

Description

Any of the functions registered with register_pre_startup_function are guaranteed to be executed by an HPX thread before any of the registered startup functions are executed (see hpx::register_startup_function()).

[Note]Note

If this function is called while the pre-startup functions are being executed, or after that point, it will raise an invalid_status exception.

This function is one of the few API functions which can be called before the runtime system has been fully initialized. It will automatically stage the provided startup function to the runtime system during its initialization (if necessary).

See Also:

hpx::register_startup_function()

Parameters:

f

[in] The function to be registered to run by an HPX thread as a pre-startup function.


Function register_startup_function

hpx::register_startup_function — Add a function to be executed by an HPX thread before hpx_main but guaranteed after any pre-startup function is executed (system-wide).

Synopsis

// In header: <hpx/hpx_fwd.hpp>


void register_startup_function(startup_function_type const & f);

Description

Any of the functions registered with register_startup_function are guaranteed to be executed by an HPX thread after any of the registered pre-startup functions are executed (see: hpx::register_pre_startup_function()), but before hpx_main is called.

[Note]Note

If this function is called while the startup functions are being executed, or after that point, it will raise an invalid_status exception.

This function is one of the few API functions which can be called before the runtime system has been fully initialized. It will automatically stage the provided startup function to the runtime system during its initialization (if necessary).

See Also:

hpx::register_pre_startup_function()

Parameters:

f

[in] The function to be registered to run by an HPX thread as a startup function.


Function register_pre_shutdown_function

hpx::register_pre_shutdown_function — Add a function to be executed by an HPX thread during hpx::finalize() but guaranteed before any shutdown function is executed (system-wide).

Synopsis

// In header: <hpx/hpx_fwd.hpp>


void register_pre_shutdown_function(shutdown_function_type const & f);

Description

Any of the functions registered with register_pre_shutdown_function are guaranteed to be executed by an HPX thread during the execution of hpx::finalize() before any of the registered shutdown functions are executed (see: hpx::register_shutdown_function()).

[Note]Note

If this function is called while the pre-shutdown functions are being executed, or after that point, it will raise an invalid_status exception.

See Also:

hpx::register_shutdown_function()

Parameters:

f

[in] The function to be registered to run by an HPX thread as a pre-shutdown function.


Function register_shutdown_function

hpx::register_shutdown_function — Add a function to be executed by an HPX thread during hpx::finalize() but guaranteed after any pre-shutdown function is executed (system-wide).

Synopsis

// In header: <hpx/hpx_fwd.hpp>


void register_shutdown_function(shutdown_function_type const & f);

Description

Any of the functions registered with register_shutdown_function are guaranteed to be executed by an HPX thread during the execution of hpx::finalize() after any of the registered pre-shutdown functions are executed (see: hpx::register_pre_shutdown_function()).

[Note]Note

If this function is called while the shutdown functions are being executed, or after that point, it will raise an invalid_status exception.

See Also:

hpx::register_pre_shutdown_function()

Parameters:

f

[in] The function to be registered to run by an HPX thread as a shutdown function.
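The four registration functions are typically used before the runtime starts; a sketch (assuming the standard hpx::init entry point):

```cpp
#include <hpx/hpx_init.hpp>
#include <hpx/hpx_finalize.hpp>
#include <hpx/include/runtime.hpp>
#include <iostream>

void on_startup()  { std::cout << "startup hook\n"; }
void on_shutdown() { std::cout << "shutdown hook\n"; }

int hpx_main(int argc, char* argv[])
{
    return hpx::finalize();
}

int main(int argc, char* argv[])
{
    // Both calls may be issued before hpx::init; the functions are staged
    // into the runtime system during its initialization.
    hpx::register_startup_function(&on_startup);
    hpx::register_shutdown_function(&on_shutdown);
    return hpx::init(argc, argv);
}
```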


Function is_starting

hpx::is_starting — Test whether the runtime system is currently being started.

Synopsis

// In header: <hpx/hpx_fwd.hpp>


bool is_starting();

Description

This function returns whether the runtime system is currently being started or not, i.e. whether the current state of the runtime system is hpx::state_startup.

[Note]Note

This function needs to be executed on an HPX-thread. It will return false otherwise.


Function is_running

hpx::is_running — Test whether the runtime system is currently running.

Synopsis

// In header: <hpx/hpx_fwd.hpp>


bool is_running();

Description

This function returns whether the runtime system is currently running or not, i.e. whether the current state of the runtime system is hpx::state_running.

[Note]Note

This function needs to be executed on an HPX-thread. It will return false otherwise.


Function is_stopped

hpx::is_stopped — Test whether the runtime system is currently stopped.

Synopsis

// In header: <hpx/hpx_fwd.hpp>


bool is_stopped();

Description

This function returns whether the runtime system is currently stopped or not, i.e. whether the current state of the runtime system is hpx::state_stopped.

[Note]Note

This function needs to be executed on an HPX-thread. It will return false otherwise.


Function is_stopped_or_shutting_down

hpx::is_stopped_or_shutting_down — Test whether the runtime system is currently being shut down.

Synopsis

// In header: <hpx/hpx_fwd.hpp>


bool is_stopped_or_shutting_down();

Description

This function returns whether the runtime system is currently being shut down or not, i.e. whether the current state of the runtime system is hpx::state_stopped or hpx::state_shutdown.

[Note]Note

This function needs to be executed on an HPX-thread. It will return false otherwise.


Function get_thread_name

hpx::get_thread_name — Return the name of the calling thread.

Synopsis

// In header: <hpx/hpx_fwd.hpp>


std::string get_thread_name();

Description

This function returns the name of the calling thread. This name uniquely identifies the thread in the context of HPX. If the function is called while no HPX runtime system is active, the result will be "<unknown>".


Function get_num_worker_threads

hpx::get_num_worker_threads — Return the number of worker OS-threads used to execute HPX threads.

Synopsis

// In header: <hpx/hpx_fwd.hpp>


std::size_t get_num_worker_threads();

Description

This function returns the number of OS-threads used to execute HPX threads. If the function is called while no HPX runtime system is active, it will return zero.


Function get_system_uptime

hpx::get_system_uptime — Return the system uptime measured on the thread executing this call.

Synopsis

// In header: <hpx/hpx_fwd.hpp>


boost::uint64_t get_system_uptime();

Description

This function returns the system uptime measured in nanoseconds for the thread executing this call. If the function is called while no HPX runtime system is active, it will return zero.


Function get_colocation_id_sync

hpx::get_colocation_id_sync — Return the id of the locality where the object referenced by the given id is currently located.

Synopsis

// In header: <hpx/hpx_fwd.hpp>


naming::id_type 
get_colocation_id_sync(naming::id_type const & id, error_code & ec = throws);

Description

The function hpx::get_colocation_id_sync() returns the id of the locality where the given object is currently located.

[Note]Note

As long as ec is not pre-initialized to hpx::throws this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

See Also:

hpx::get_colocation_id()

Parameters:

ec

[in,out] this represents the error status on exit, if this is pre-initialized to hpx::throws the function will throw on error instead.

id

[in] The id of the object to locate.


Function get_colocation_id

hpx::get_colocation_id — Asynchronously return the id of the locality where the object referenced by the given id is currently located.

Synopsis

// In header: <hpx/hpx_fwd.hpp>


lcos::future< naming::id_type > get_colocation_id(naming::id_type const & id);

Description

See Also:

hpx::get_colocation_id_sync()


Function start_active_counters

hpx::start_active_counters — Start all active performance counters.

Synopsis

// In header: <hpx/hpx_fwd.hpp>


void start_active_counters(error_code & ec = throws);

Description

[Note]Note

As long as ec is not pre-initialized to hpx::throws this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

The active counters are those which have been specified on the command line while executing the application (see the command line option --hpx:print-counter).

Parameters:

ec

[in,out] This represents the error status on exit. If this is pre-initialized to hpx::throws, the function will throw on error instead.


Function reset_active_counters

hpx::reset_active_counters — Resets all active performance counters.

Synopsis

// In header: <hpx/hpx_fwd.hpp>


void reset_active_counters(error_code & ec = throws);

Description

[Note]Note

As long as ec is not pre-initialized to hpx::throws this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

The active counters are those which have been specified on the command line while executing the application (see the command line option --hpx:print-counter).

Parameters:

ec

[in,out] This represents the error status on exit. If this is pre-initialized to hpx::throws, the function will throw on error instead.


Function stop_active_counters

hpx::stop_active_counters — Stop all active performance counters.

Synopsis

// In header: <hpx/hpx_fwd.hpp>


void stop_active_counters(error_code & ec = throws);

Description

[Note]Note

As long as ec is not pre-initialized to hpx::throws this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

The active counters are those which have been specified on the command line while executing the application (see the command line option --hpx:print-counter).

Parameters:

ec

[in,out] This represents the error status on exit. If this is pre-initialized to hpx::throws, the function will throw on error instead.


Function evaluate_active_counters

hpx::evaluate_active_counters — Evaluate and output all active performance counters, optionally naming the point in code marked by this function.

Synopsis

// In header: <hpx/hpx_fwd.hpp>


void evaluate_active_counters(bool reset = false, 
                              char const * description = 0, 
                              error_code & ec = throws);

Description

[Note]Note

As long as ec is not pre-initialized to hpx::throws this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

The output generated by this function is redirected to the destination specified by the corresponding command line options (see --hpx:print-counter-destination).

The active counters are those which have been specified on the command line while executing the application (see the command line option --hpx:print-counter).

Parameters:

description

[in] this is an optional value naming the point in the code marked by the call to this function.

ec

[in,out] This represents the error status on exit. If this is pre-initialized to hpx::throws, the function will throw on error instead.

reset

[in] this is an optional flag allowing the counter values to be reset after they have been evaluated.
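A common pattern brackets a region of interest with a reset and an evaluation; a sketch (do_work is a hypothetical workload, and the counters must have been activated via --hpx:print-counter):

```cpp
#include <hpx/include/runtime.hpp>

void do_work();   // hypothetical workload, defined elsewhere

void measure_region()
{
    hpx::reset_active_counters();       // start the region from zero

    do_work();

    // Print all active counters, then reset them again for the next region.
    hpx::evaluate_active_counters(true, "after do_work");
}
```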


Function create_message_handler

hpx::create_message_handler — Create an instance of a message handler plugin.

Synopsis

// In header: <hpx/hpx_fwd.hpp>


parcelset::policies::message_handler * 
create_message_handler(char const * message_handler_type, char const * action, 
                       parcelset::parcelport * pp, std::size_t num_messages, 
                       std::size_t interval, error_code & ec = throws);

Description

The function hpx::create_message_handler() creates an instance of a message handler plugin based on the parameters specified.

[Note]Note

As long as ec is not pre-initialized to hpx::throws this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

Parameters:

action

ec

[in,out] this represents the error status on exit; if this is pre-initialized to hpx::throws the function will throw on error instead.

interval

message_handler_type

num_messages

pp


Function create_binary_filter

hpx::create_binary_filter — Create an instance of a binary filter plugin.

Synopsis

// In header: <hpx/hpx_fwd.hpp>


serialization::binary_filter * 
create_binary_filter(char const * binary_filter_type, bool compress, 
                     serialization::binary_filter * next_filter = 0, 
                     error_code & ec = throws);

Description

[Note]Note

As long as ec is not pre-initialized to hpx::throws, this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

Parameters:

ec

[in,out] this represents the error status on exit; if this is pre-initialized to hpx::throws the function will throw on error instead.

namespace hpx_startup {
}

namespace hpx {
  int init(util::function_nonser< int(boost::program_options::variables_map &vm) > const &, 
           boost::program_options::options_description const &, int, char **, 
           std::vector< std::string > const &, 
           util::function_nonser< void()> const & = util::function_nonser< void()>(), 
           util::function_nonser< void()> const & = util::function_nonser< void()>(), 
           hpx::runtime_mode = hpx::runtime_mode_default);
  int init(int(*)(boost::program_options::variables_map &vm), 
           boost::program_options::options_description const &, int, char **, 
           util::function_nonser< void()> const & = util::function_nonser< void()>(), 
           util::function_nonser< void()> const & = util::function_nonser< void()>(), 
           hpx::runtime_mode = hpx::runtime_mode_default);
  int init(boost::program_options::options_description const &, int, char **, 
           util::function_nonser< void()> const & = util::function_nonser< void()>(), 
           util::function_nonser< void()> const & = util::function_nonser< void()>(), 
           hpx::runtime_mode = hpx::runtime_mode_default);
  int init(boost::program_options::options_description const &, int, char **, 
           std::vector< std::string > const &, 
           util::function_nonser< void()> const & = util::function_nonser< void()>(), 
           util::function_nonser< void()> const & = util::function_nonser< void()>(), 
           hpx::runtime_mode = hpx::runtime_mode_default);
  int init(int, char **, std::vector< std::string > const &, 
           hpx::runtime_mode = hpx::runtime_mode_default);
  int init(boost::program_options::options_description const &, int, char **, 
           hpx::runtime_mode);
  int init(std::string const &, int = 0, char ** = 0, 
           hpx::runtime_mode = hpx::runtime_mode_default);
  int init(int = 0, char ** = 0, 
           hpx::runtime_mode = hpx::runtime_mode_default);
  int init(std::vector< std::string > const &, 
           hpx::runtime_mode = hpx::runtime_mode_default);
  int init(int(*)(boost::program_options::variables_map &vm), 
           std::string const &, int, char **, 
           hpx::runtime_mode = hpx::runtime_mode_default);
  int init(int(*)(boost::program_options::variables_map &vm), int, char **, 
           hpx::runtime_mode = hpx::runtime_mode_default);
  int init(util::function_nonser< int(int, char **)> const &, 
           std::string const &, int, char **, 
           hpx::runtime_mode = hpx::runtime_mode_default);
  int init(util::function_nonser< int(int, char **)> const &, int, char **, 
           hpx::runtime_mode = hpx::runtime_mode_default);
}

Function init

hpx::init — Main entry point for launching the HPX runtime system.

Synopsis

// In header: <hpx/hpx_init.hpp>


int init(util::function_nonser< int(boost::program_options::variables_map &vm) > const & f, 
         boost::program_options::options_description const & desc_cmdline, 
         int argc, char ** argv, std::vector< std::string > const & cfg, 
         util::function_nonser< void()> const & startup = util::function_nonser< void()>(), 
         util::function_nonser< void()> const & shutdown = util::function_nonser< void()>(), 
         hpx::runtime_mode mode = hpx::runtime_mode_default);

Description

This is the main entry point for any HPX application. This function (or one of its overloads below) should be called from the user's main() function. It will set up the HPX runtime environment and schedule the function given by f as an HPX thread.

[Note]Note

If the parameter mode is not given (defaulted), the created runtime system instance will be executed in console or worker mode depending on the command line arguments passed in argc/argv. Otherwise it will be executed as specified by the parameter mode.

Parameters:

argc

[in] The number of command line arguments passed in argv. This is usually the unchanged value as passed by the operating system (to main()).

argv

[in] The command line arguments for this application, usually that is the value as passed by the operating system (to main()).

cfg

A list of configuration settings which will be added to the system configuration before the runtime instance is run. Each of the entries in this list must have the format of a fully defined key/value pair from an ini-file (for instance 'hpx.component.enabled=1').

desc_cmdline

[in] This parameter may hold the description of additional command line arguments understood by the application. These options will be prepended to the default command line options understood by hpx::init (see description below).

f

[in] The function to be scheduled as an HPX thread. Usually this function represents the main entry point of any HPX application.

mode

[in] The mode the created runtime environment should be initialized in. There has to be exactly one locality in each HPX application which is executed in console mode (hpx::runtime_mode_console), all other localities have to be run in worker mode (hpx::runtime_mode_worker). Normally this is set up automatically, but sometimes it is necessary to explicitly specify the mode.

shutdown

[in] A function to be executed inside an HPX thread while hpx::finalize is executed. If this parameter is not given no function will be executed.

startup

[in] A function to be executed inside an HPX thread before f is called. If this parameter is not given no function will be executed.

Returns:

The function returns the value returned from the user-supplied function f.
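
A minimal sketch of launching the runtime through one of these hpx::init overloads (the names entry_point and my_app are illustrative, not part of HPX; error handling is omitted):

```cpp
#include <hpx/hpx_init.hpp>
#include <boost/program_options.hpp>

// Hypothetical entry point: runs as an HPX thread once the runtime is up.
int entry_point(boost::program_options::variables_map& vm)
{
    // ... application code ...
    return hpx::finalize();   // initiate runtime shutdown
}

int main(int argc, char* argv[])
{
    boost::program_options::options_description
        desc("Usage: my_app [options]");
    // application-specific command line options could be added to desc here;
    // they are prepended to the default HPX options

    return hpx::init(&entry_point, desc, argc, argv);
}
```

The value returned by entry_point becomes the return value of hpx::init and thus of main().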


Function init

hpx::init — Main entry point for launching the HPX runtime system.

Synopsis

// In header: <hpx/hpx_init.hpp>


int init(int(*)(boost::program_options::variables_map &vm) f, 
         boost::program_options::options_description const & desc_cmdline, 
         int argc, char ** argv, 
         util::function_nonser< void()> const & startup = util::function_nonser< void()>(), 
         util::function_nonser< void()> const & shutdown = util::function_nonser< void()>(), 
         hpx::runtime_mode mode = hpx::runtime_mode_default);

Description

This is the main entry point for any HPX application. This function (or one of its overloads below) should be called from the user's main() function. It will set up the HPX runtime environment and schedule the function given by f as an HPX thread.

[Note]Note

If the parameter mode is not given (defaulted), the created runtime system instance will be executed in console or worker mode depending on the command line arguments passed in argc/argv. Otherwise it will be executed as specified by the parameter mode.

Parameters:

argc

[in] The number of command line arguments passed in argv. This is usually the unchanged value as passed by the operating system (to main()).

argv

[in] The command line arguments for this application, usually that is the value as passed by the operating system (to main()).

desc_cmdline

[in] This parameter may hold the description of additional command line arguments understood by the application. These options will be prepended to the default command line options understood by hpx::init (see description below).

f

[in] The function to be scheduled as an HPX thread. Usually this function represents the main entry point of any HPX application.

mode

[in] The mode the created runtime environment should be initialized in. There has to be exactly one locality in each HPX application which is executed in console mode (hpx::runtime_mode_console), all other localities have to be run in worker mode (hpx::runtime_mode_worker). Normally this is set up automatically, but sometimes it is necessary to explicitly specify the mode.

shutdown

[in] A function to be executed inside an HPX thread while hpx::finalize is executed. If this parameter is not given no function will be executed.

startup

[in] A function to be executed inside an HPX thread before f is called. If this parameter is not given no function will be executed.

Returns:

The function returns the value returned from the user-supplied function f.


Function init

hpx::init — Main entry point for launching the HPX runtime system.

Synopsis

// In header: <hpx/hpx_init.hpp>


int init(boost::program_options::options_description const & desc_cmdline, 
         int argc, char ** argv, 
         util::function_nonser< void()> const & startup = util::function_nonser< void()>(), 
         util::function_nonser< void()> const & shutdown = util::function_nonser< void()>(), 
         hpx::runtime_mode mode = hpx::runtime_mode_default);

Description

This is a simplified main entry point, which can be used to set up the runtime for an HPX application (the runtime system will be set up in console mode or worker mode depending on the command line settings).

In console mode it will execute the user-supplied function hpx_main; in worker mode it will execute an empty hpx_main.

[Note]Note

If the parameter mode is not given (defaulted), the created runtime system instance will be executed in console or worker mode depending on the command line arguments passed in argc/argv. Otherwise it will be executed as specified by the parameter mode.

Parameters:

argc

[in] The number of command line arguments passed in argv. This is usually the unchanged value as passed by the operating system (to main()).

argv

[in] The command line arguments for this application, usually that is the value as passed by the operating system (to main()).

desc_cmdline

[in] This parameter may hold the description of additional command line arguments understood by the application. These options will be prepended to the default command line options understood by hpx::init (see description below).

mode

[in] The mode the created runtime environment should be initialized in. There has to be exactly one locality in each HPX application which is executed in console mode (hpx::runtime_mode_console), all other localities have to be run in worker mode (hpx::runtime_mode_worker). Normally this is set up automatically, but sometimes it is necessary to explicitly specify the mode.

shutdown

[in] A function to be executed inside an HPX thread while hpx::finalize is executed. If this parameter is not given no function will be executed.

startup

[in] A function to be executed inside an HPX thread before f is called. If this parameter is not given no function will be executed.

Returns:

The function returns the value returned from hpx_main (or 0 when executed in worker mode).


Function init

hpx::init — Main entry point for launching the HPX runtime system.

Synopsis

// In header: <hpx/hpx_init.hpp>


int init(boost::program_options::options_description const & desc_cmdline, 
         int argc, char ** argv, std::vector< std::string > const & cfg, 
         util::function_nonser< void()> const & startup = util::function_nonser< void()>(), 
         util::function_nonser< void()> const & shutdown = util::function_nonser< void()>(), 
         hpx::runtime_mode mode = hpx::runtime_mode_default);

Description

This is a simplified main entry point, which can be used to set up the runtime for an HPX application (the runtime system will be set up in console mode or worker mode depending on the command line settings).

In console mode it will execute the user-supplied function hpx_main; in worker mode it will execute an empty hpx_main.

[Note]Note

If the parameter mode is not given (defaulted), the created runtime system instance will be executed in console or worker mode depending on the command line arguments passed in argc/argv. Otherwise it will be executed as specified by the parameter mode.

Parameters:

argc

[in] The number of command line arguments passed in argv. This is usually the unchanged value as passed by the operating system (to main()).

argv

[in] The command line arguments for this application, usually that is the value as passed by the operating system (to main()).

cfg

A list of configuration settings which will be added to the system configuration before the runtime instance is run. Each of the entries in this list must have the format of a fully defined key/value pair from an ini-file (for instance 'hpx.component.enabled=1').

desc_cmdline

[in] This parameter may hold the description of additional command line arguments understood by the application. These options will be prepended to the default command line options understood by hpx::init (see description below).

mode

[in] The mode the created runtime environment should be initialized in. There has to be exactly one locality in each HPX application which is executed in console mode (hpx::runtime_mode_console), all other localities have to be run in worker mode (hpx::runtime_mode_worker). Normally this is set up automatically, but sometimes it is necessary to explicitly specify the mode.

shutdown

[in] A function to be executed inside an HPX thread while hpx::finalize is executed. If this parameter is not given no function will be executed.

startup

[in] A function to be executed inside an HPX thread before f is called. If this parameter is not given no function will be executed.

Returns:

The function returns the value returned from hpx_main (or 0 when executed in worker mode).
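
A sketch of this variant, which relies on a user-provided hpx_main instead of an explicit entry-point argument (the names my_app and the cfg entry mirror the example given in the parameter description; the hpx_main signature shown is one of the supported forms):

```cpp
#include <hpx/hpx_init.hpp>
#include <boost/program_options.hpp>
#include <string>
#include <vector>

// Picked up automatically by hpx::init: executed in console mode,
// replaced by an empty stub in worker mode.
int hpx_main(boost::program_options::variables_map&)
{
    return hpx::finalize();
}

int main(int argc, char* argv[])
{
    boost::program_options::options_description desc("my_app options");

    // each cfg entry is a fully defined ini-file key/value pair
    std::vector<std::string> const cfg = { "hpx.component.enabled=1" };

    return hpx::init(desc, argc, argv, cfg);
}
```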


Function init

hpx::init — Main entry point for launching the HPX runtime system.

Synopsis

// In header: <hpx/hpx_init.hpp>


int init(int argc, char ** argv, std::vector< std::string > const & cfg, 
         hpx::runtime_mode mode = hpx::runtime_mode_default);

Description

This is a simplified main entry point, which can be used to set up the runtime for an HPX application (the runtime system will be set up in console mode or worker mode depending on the command line settings).

In console mode it will execute the user-supplied function hpx_main; in worker mode it will execute an empty hpx_main.

[Note]Note

The created runtime system instance will be executed in console or worker mode depending on the command line arguments passed in argc/argv.

Parameters:

argc

[in] The number of command line arguments passed in argv. This is usually the unchanged value as passed by the operating system (to main()).

argv

[in] The command line arguments for this application, usually that is the value as passed by the operating system (to main()).

cfg

A list of configuration settings which will be added to the system configuration before the runtime instance is run. Each of the entries in this list must have the format of a fully defined key/value pair from an ini-file (for instance 'hpx.component.enabled=1').

mode

[in] The mode the created runtime environment should be initialized in. There has to be exactly one locality in each HPX application which is executed in console mode (hpx::runtime_mode_console), all other localities have to be run in worker mode (hpx::runtime_mode_worker). Normally this is set up automatically, but sometimes it is necessary to explicitly specify the mode.

Returns:

The function returns the value returned from hpx_main (or 0 when executed in worker mode).


Function init

hpx::init — Main entry point for launching the HPX runtime system.

Synopsis

// In header: <hpx/hpx_init.hpp>


int init(boost::program_options::options_description const & desc_cmdline, 
         int argc, char ** argv, hpx::runtime_mode mode);

Description

This is a simplified main entry point, which can be used to set up the runtime for an HPX application (the runtime system will be set up in console mode or worker mode depending on the command line settings).

In console mode it will execute the user-supplied function hpx_main; in worker mode it will execute an empty hpx_main.

[Note]Note

If the parameter mode is runtime_mode_default, the created runtime system instance will be executed in console or worker mode depending on the command line arguments passed in argc/argv. Otherwise it will be executed as specified by the parameter mode.

Parameters:

argc

[in] The number of command line arguments passed in argv. This is usually the unchanged value as passed by the operating system (to main()).

argv

[in] The command line arguments for this application, usually that is the value as passed by the operating system (to main()).

desc_cmdline

[in] This parameter may hold the description of additional command line arguments understood by the application. These options will be prepended to the default command line options understood by hpx::init (see description below).

mode

[in] The mode the created runtime environment should be initialized in. There has to be exactly one locality in each HPX application which is executed in console mode (hpx::runtime_mode_console), all other localities have to be run in worker mode (hpx::runtime_mode_worker). Normally this is set up automatically, but sometimes it is necessary to explicitly specify the mode.

Returns:

The function returns the value returned from hpx_main (or 0 when executed in worker mode).


Function init

hpx::init — Main entry point for launching the HPX runtime system.

Synopsis

// In header: <hpx/hpx_init.hpp>


int init(std::string const & app_name, int argc = 0, char ** argv = 0, 
         hpx::runtime_mode mode = hpx::runtime_mode_default);

Description

This is a simplified main entry point, which can be used to set up the runtime for an HPX application (the runtime system will be set up in console mode or worker mode depending on the command line settings).

[Note]Note

The created runtime system instance will be executed in console or worker mode depending on the command line arguments passed in argc/argv.

Parameters:

app_name

[in] The name of the application.

argc

[in] The number of command line arguments passed in argv. This is usually the unchanged value as passed by the operating system (to main()).

argv

[in] The command line arguments for this application, usually that is the value as passed by the operating system (to main()).

mode

[in] The mode the created runtime environment should be initialized in. There has to be exactly one locality in each HPX application which is executed in console mode (hpx::runtime_mode_console), all other localities have to be run in worker mode (hpx::runtime_mode_worker). Normally this is set up automatically, but sometimes it is necessary to explicitly specify the mode.

Returns:

The function returns the value returned from hpx_main (or 0 when executed in worker mode).


Function init

hpx::init — Main entry point for launching the HPX runtime system.

Synopsis

// In header: <hpx/hpx_init.hpp>


int init(int argc = 0, char ** argv = 0, 
         hpx::runtime_mode mode = hpx::runtime_mode_default);

Description

This is a simplified main entry point, which can be used to set up the runtime for an HPX application (the runtime system will be set up in console mode or worker mode depending on the command line settings).

[Note]Note

The created runtime system instance will be executed in console or worker mode depending on the command line arguments passed in argc/argv. If no command line arguments are passed, console mode is assumed.

If no command line arguments are passed the HPX runtime system will not support any of the default command line options as described in the section 'HPX Command Line Options'.

Parameters:

argc

[in] The number of command line arguments passed in argv. This is usually the unchanged value as passed by the operating system (to main()).

argv

[in] The command line arguments for this application, usually that is the value as passed by the operating system (to main()).

mode

[in] The mode the created runtime environment should be initialized in. There has to be exactly one locality in each HPX application which is executed in console mode (hpx::runtime_mode_console), all other localities have to be run in worker mode (hpx::runtime_mode_worker). Normally this is set up automatically, but sometimes it is necessary to explicitly specify the mode.

Returns:

The function returns the value returned from hpx_main (or 0 when executed in worker mode).
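
The most minimal form passes no command line arguments at all; as noted above, none of the default HPX command line options are then available. A sketch (the hpx_main signature shown is one of the supported forms):

```cpp
#include <hpx/hpx_init.hpp>

int hpx_main(int argc, char* argv[])
{
    // runs in console mode, since no command line arguments were passed
    return hpx::finalize();
}

int main()
{
    return hpx::init();   // defaults: argc = 0, argv = 0
}
```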


Function init

hpx::init — Main entry point for launching the HPX runtime system.

Synopsis

// In header: <hpx/hpx_init.hpp>


int init(std::vector< std::string > const & cfg, 
         hpx::runtime_mode mode = hpx::runtime_mode_default);

Description

This is a simplified main entry point, which can be used to set up the runtime for an HPX application (the runtime system will be set up in console mode or worker mode depending on the command line settings).

[Note]Note

The created runtime system instance will be executed in console or worker mode depending on the command line arguments passed in argc/argv. If no command line arguments are passed, console mode is assumed.

If no command line arguments are passed the HPX runtime system will not support any of the default command line options as described in the section 'HPX Command Line Options'.

Parameters:

cfg

A list of configuration settings which will be added to the system configuration before the runtime instance is run. Each of the entries in this list must have the format of a fully defined key/value pair from an ini-file (for instance 'hpx.component.enabled=1').

mode

[in] The mode the created runtime environment should be initialized in. There has to be exactly one locality in each HPX application which is executed in console mode (hpx::runtime_mode_console), all other localities have to be run in worker mode (hpx::runtime_mode_worker). Normally this is set up automatically, but sometimes it is necessary to explicitly specify the mode.

Returns:

The function returns the value returned from hpx_main (or 0 when executed in worker mode).


Function init

hpx::init — Main entry point for launching the HPX runtime system.

Synopsis

// In header: <hpx/hpx_init.hpp>


int init(int(*)(boost::program_options::variables_map &vm) f, 
         std::string const & app_name, int argc, char ** argv, 
         hpx::runtime_mode mode = hpx::runtime_mode_default);

Description

This is a simplified main entry point, which can be used to set up the runtime for an HPX application (the runtime system will be set up in console mode or worker mode depending on the command line settings).

[Note]Note

The created runtime system instance will be executed in console or worker mode depending on the command line arguments passed in argc/argv.

Parameters:

app_name

[in] The name of the application.

argc

[in] The number of command line arguments passed in argv. This is usually the unchanged value as passed by the operating system (to main()).

argv

[in] The command line arguments for this application, usually that is the value as passed by the operating system (to main()).

f

[in] The function to be scheduled as an HPX thread. Usually this function represents the main entry point of any HPX application.

mode

[in] The mode the created runtime environment should be initialized in. There has to be exactly one locality in each HPX application which is executed in console mode (hpx::runtime_mode_console), all other localities have to be run in worker mode (hpx::runtime_mode_worker). Normally this is set up automatically, but sometimes it is necessary to explicitly specify the mode.

Returns:

The function returns the value returned from the user-supplied function f.


Function init

hpx::init — Main entry point for launching the HPX runtime system.

Synopsis

// In header: <hpx/hpx_init.hpp>


int init(int(*)(boost::program_options::variables_map &vm) f, int argc, 
         char ** argv, hpx::runtime_mode mode = hpx::runtime_mode_default);

Description

This is a simplified main entry point, which can be used to set up the runtime for an HPX application (the runtime system will be set up in console mode or worker mode depending on the command line settings).

[Note]Note

The created runtime system instance will be executed in console or worker mode depending on the command line arguments passed in argc/argv.

Parameters:

argc

[in] The number of command line arguments passed in argv. This is usually the unchanged value as passed by the operating system (to main()).

argv

[in] The command line arguments for this application, usually that is the value as passed by the operating system (to main()).

f

[in] The function to be scheduled as an HPX thread. Usually this function represents the main entry point of any HPX application.

mode

[in] The mode the created runtime environment should be initialized in. There has to be exactly one locality in each HPX application which is executed in console mode (hpx::runtime_mode_console), all other localities have to be run in worker mode (hpx::runtime_mode_worker). Normally this is set up automatically, but sometimes it is necessary to explicitly specify the mode.

Returns:

The function returns the value returned from the user-supplied function f.


Function init

hpx::init — Main entry point for launching the HPX runtime system.

Synopsis

// In header: <hpx/hpx_init.hpp>


int init(util::function_nonser< int(int, char **)> const & f, 
         std::string const & app_name, int argc, char ** argv, 
         hpx::runtime_mode mode = hpx::runtime_mode_default);

Description

This is a simplified main entry point, which can be used to set up the runtime for an HPX application (the runtime system will be set up in console mode or worker mode depending on the command line settings).

[Note]Note

The created runtime system instance will be executed in console or worker mode depending on the command line arguments passed in argc/argv.

Parameters:

app_name

[in] The name of the application.

argc

[in] The number of command line arguments passed in argv. This is usually the unchanged value as passed by the operating system (to main()).

argv

[in] The command line arguments for this application, usually that is the value as passed by the operating system (to main()).

f

[in] The function to be scheduled as an HPX thread. Usually this function represents the main entry point of any HPX application.

mode

[in] The mode the created runtime environment should be initialized in. There has to be exactly one locality in each HPX application which is executed in console mode (hpx::runtime_mode_console), all other localities have to be run in worker mode (hpx::runtime_mode_worker). Normally this is set up automatically, but sometimes it is necessary to explicitly specify the mode.

Returns:

The function returns the value returned from the user-supplied function f.


Function init

hpx::init — Main entry point for launching the HPX runtime system.

Synopsis

// In header: <hpx/hpx_init.hpp>


int init(util::function_nonser< int(int, char **)> const & f, int argc, 
         char ** argv, hpx::runtime_mode mode = hpx::runtime_mode_default);

Description

This is a simplified main entry point, which can be used to set up the runtime for an HPX application (the runtime system will be set up in console mode or worker mode depending on the command line settings).

[Note]Note

The created runtime system instance will be executed in console or worker mode depending on the command line arguments passed in argc/argv.

Parameters:

argc

[in] The number of command line arguments passed in argv. This is usually the unchanged value as passed by the operating system (to main()).

argv

[in] The command line arguments for this application, usually that is the value as passed by the operating system (to main()).

f

[in] The function to be scheduled as an HPX thread. Usually this function represents the main entry point of any HPX application.

mode

[in] The mode the created runtime environment should be initialized in. There has to be exactly one locality in each HPX application which is executed in console mode (hpx::runtime_mode_console), all other localities have to be run in worker mode (hpx::runtime_mode_worker). Normally this is set up automatically, but sometimes it is necessary to explicitly specify the mode.

Returns:

The function returns the value returned from the user-supplied function f.

namespace hpx_startup {
}

namespace hpx {
  bool start(util::function_nonser< int(boost::program_options::variables_map &vm)> const &, 
             boost::program_options::options_description const &, int, 
             char **, std::vector< std::string > const &, 
             util::function_nonser< void()> const & = util::function_nonser< void()>(), 
             util::function_nonser< void()> const & = util::function_nonser< void()>(), 
             hpx::runtime_mode = hpx::runtime_mode_default);
  bool start(int(*)(boost::program_options::variables_map &vm), 
             boost::program_options::options_description const &, int, 
             char **, 
             util::function_nonser< void()> const & = util::function_nonser< void()>(), 
             util::function_nonser< void()> const & = util::function_nonser< void()>(), 
             hpx::runtime_mode = hpx::runtime_mode_default);
  bool start(boost::program_options::options_description const &, int, 
             char **, 
             util::function_nonser< void()> const & = util::function_nonser< void()>(), 
             util::function_nonser< void()> const & = util::function_nonser< void()>(), 
             hpx::runtime_mode = hpx::runtime_mode_default);
  bool start(boost::program_options::options_description const &, int, 
             char **, std::vector< std::string > const &, 
             util::function_nonser< void()> const & = util::function_nonser< void()>(), 
             util::function_nonser< void()> const & = util::function_nonser< void()>(), 
             hpx::runtime_mode = hpx::runtime_mode_default);
  bool start(int, char **, std::vector< std::string > const &, 
             hpx::runtime_mode = hpx::runtime_mode_default);
  bool start(boost::program_options::options_description const &, int, 
             char **, hpx::runtime_mode);
  bool start(std::string const &, int = 0, char ** = 0, 
             hpx::runtime_mode = hpx::runtime_mode_default);
  bool start(int = 0, char ** = 0, 
             hpx::runtime_mode = hpx::runtime_mode_default);
  bool start(std::vector< std::string > const &, 
             hpx::runtime_mode = hpx::runtime_mode_default);
  bool start(int(*)(boost::program_options::variables_map &vm), 
             std::string const &, int, char **, 
             hpx::runtime_mode = hpx::runtime_mode_default);
  bool start(util::function_nonser< int(int, char **)> const &, 
             std::string const &, int, char **, 
             hpx::runtime_mode = hpx::runtime_mode_default);
  bool start(int(*)(boost::program_options::variables_map &vm), int, char **, 
             hpx::runtime_mode = hpx::runtime_mode_default);
  bool start(util::function_nonser< int(int, char **)> const &, int, char **, 
             hpx::runtime_mode = hpx::runtime_mode_default);
}

Function start

hpx::start — Main non-blocking entry point for launching the HPX runtime system.

Synopsis

// In header: <hpx/hpx_start.hpp>


bool start(util::function_nonser< int(boost::program_options::variables_map &vm)> const & f, 
           boost::program_options::options_description const & desc_cmdline, 
           int argc, char ** argv, std::vector< std::string > const & cfg, 
           util::function_nonser< void()> const & startup = util::function_nonser< void()>(), 
           util::function_nonser< void()> const & shutdown = util::function_nonser< void()>(), 
           hpx::runtime_mode mode = hpx::runtime_mode_default);

Description

This is the main, non-blocking entry point for any HPX application. This function (or one of its overloads below) should be called from the user's main() function. It will set up the HPX runtime environment and schedule the function given by f as an HPX thread. It will return immediately after that. Use hpx::wait and hpx::stop to synchronize with the runtime system's execution.

[Note]Note

If the parameter mode is not given (defaulted), the created runtime system instance will be executed in console or worker mode depending on the command line arguments passed in argc/argv. Otherwise it will be executed as specified by the parameter mode.

Parameters:

argc

[in] The number of command line arguments passed in argv. This is usually the unchanged value as passed by the operating system (to main()).

argv

[in] The command line arguments for this application, usually that is the value as passed by the operating system (to main()).

cfg

A list of configuration settings which will be added to the system configuration before the runtime instance is run. Each of the entries in this list must have the format of a fully defined key/value pair from an ini-file (for instance 'hpx.component.enabled=1')

desc_cmdline

[in] This parameter may hold the description of additional command line arguments understood by the application. These options will be prepended to the default command line options understood by hpx::init (see description below).

f

[in] The function to be scheduled as an HPX thread. Usually this function represents the main entry point of any HPX application.

mode

[in] The mode the created runtime environment should be initialized in. There has to be exactly one locality in each HPX application which is executed in console mode (hpx::runtime_mode_console), all other localities have to be run in worker mode (hpx::runtime_mode_worker). Normally this is set up automatically, but sometimes it is necessary to explicitly specify the mode.

shutdown

[in] A function to be executed inside an HPX thread while hpx::finalize is executed. If this parameter is not given no function will be executed.

startup

[in] A function to be executed inside an HPX thread before f is called. If this parameter is not given no function will be executed.

Returns:

The function returns true if command line processing succeeded and the runtime system was started successfully. It will return false otherwise.
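A minimal usage sketch of this overload follows. The application function name hpx_app and the configuration entry are illustrative, and the sketch assumes an HPX build environment; it is not a definitive implementation.

```cpp
#include <hpx/hpx_start.hpp>
#include <hpx/hpx_finalize.hpp>

#include <boost/program_options.hpp>
#include <string>
#include <vector>

// Entry point scheduled as an HPX thread by hpx::start().
int hpx_app(boost::program_options::variables_map& vm)
{
    // ... application work ...
    return hpx::finalize();   // initiate runtime shutdown
}

int main(int argc, char* argv[])
{
    boost::program_options::options_description desc("Allowed options");
    std::vector<std::string> const cfg = {
        "hpx.os_threads=2"    // illustrative configuration entry
    };

    // Returns immediately after the runtime has been launched.
    if (!hpx::start(&hpx_app, desc, argc, argv, cfg))
        return -1;

    // Do other work on the OS thread, then block until shutdown.
    return hpx::stop();
}
```

Because hpx::start returns immediately, the final hpx::stop() call is what keeps main() alive until hpx_app has invoked hpx::finalize.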


Function start

hpx::start — Main non-blocking entry point for launching the HPX runtime system.

Synopsis

// In header: <hpx/hpx_start.hpp>


bool start(int (*f)(boost::program_options::variables_map &vm), 
           boost::program_options::options_description const & desc_cmdline, 
           int argc, char ** argv, 
           util::function_nonser< void()> const & startup = util::function_nonser< void()>(), 
           util::function_nonser< void()> const & shutdown = util::function_nonser< void()>(), 
           hpx::runtime_mode mode = hpx::runtime_mode_default);

Description

This is the main, non-blocking entry point for any HPX application. This function (or one of its overloads below) should be called from the user's main() function. It will set up the HPX runtime environment and schedule the function given by f as an HPX thread. It will return immediately after that. Use hpx::wait and hpx::stop to synchronize with the runtime system's execution.

[Note]Note

If the parameter mode is not given (defaulted), the created runtime system instance will be executed in console or worker mode depending on the command line arguments passed in argc/argv. Otherwise it will be executed as specified by the parameter mode.

Parameters:

argc

[in] The number of command line arguments passed in argv. This is usually the unchanged value as passed by the operating system (to main()).

argv

[in] The command line arguments for this application, usually that is the value as passed by the operating system (to main()).

desc_cmdline

[in] This parameter may hold the description of additional command line arguments understood by the application. These options will be prepended to the default command line options understood by hpx::init (see description below).

f

[in] The function to be scheduled as an HPX thread. Usually this function represents the main entry point of any HPX application.

mode

[in] The mode the created runtime environment should be initialized in. There has to be exactly one locality in each HPX application which is executed in console mode (hpx::runtime_mode_console), all other localities have to be run in worker mode (hpx::runtime_mode_worker). Normally this is set up automatically, but sometimes it is necessary to explicitly specify the mode.

shutdown

[in] A function to be executed inside an HPX thread while hpx::finalize is executed. If this parameter is not given no function will be executed.

startup

[in] A function to be executed inside an HPX thread before f is called. If this parameter is not given no function will be executed.

Returns:

The function returns true if command line processing succeeded and the runtime system was started successfully. It will return false otherwise.


Function start

hpx::start — Main non-blocking entry point for launching the HPX runtime system.

Synopsis

// In header: <hpx/hpx_start.hpp>


bool start(boost::program_options::options_description const & desc_cmdline, 
           int argc, char ** argv, 
           util::function_nonser< void()> const & startup = util::function_nonser< void()>(), 
           util::function_nonser< void()> const & shutdown = util::function_nonser< void()>(), 
           hpx::runtime_mode mode = hpx::runtime_mode_default);

Description

This is a simplified main, non-blocking entry point, which can be used to set up the runtime for an HPX application (the runtime system will be set up in console mode or worker mode depending on the command line settings). It will return immediately after that. Use hpx::wait and hpx::stop to synchronize with the runtime system's execution.

In console mode it will execute the user-supplied function hpx_main, in worker mode it will execute an empty hpx_main.

[Note]Note

If the parameter mode is not given (defaulted), the created runtime system instance will be executed in console or worker mode depending on the command line arguments passed in argc/argv. Otherwise it will be executed as specified by the parameter mode.

Parameters:

argc

[in] The number of command line arguments passed in argv. This is usually the unchanged value as passed by the operating system (to main()).

argv

[in] The command line arguments for this application, usually that is the value as passed by the operating system (to main()).

desc_cmdline

[in] This parameter may hold the description of additional command line arguments understood by the application. These options will be prepended to the default command line options understood by hpx::init (see description below).

mode

[in] The mode the created runtime environment should be initialized in. There has to be exactly one locality in each HPX application which is executed in console mode (hpx::runtime_mode_console), all other localities have to be run in worker mode (hpx::runtime_mode_worker). Normally this is set up automatically, but sometimes it is necessary to explicitly specify the mode.

shutdown

[in] A function to be executed inside an HPX thread while hpx::finalize is executed. If this parameter is not given no function will be executed.

startup

[in] A function to be executed inside an HPX thread before f is called. If this parameter is not given no function will be executed.

Returns:

The function returns true if command line processing succeeded and the runtime system was started successfully. It will return false otherwise.


Function start

hpx::start — Main non-blocking entry point for launching the HPX runtime system.

Synopsis

// In header: <hpx/hpx_start.hpp>


bool start(boost::program_options::options_description const & desc_cmdline, 
           int argc, char ** argv, std::vector< std::string > const & cfg, 
           util::function_nonser< void()> const & startup = util::function_nonser< void()>(), 
           util::function_nonser< void()> const & shutdown = util::function_nonser< void()>(), 
           hpx::runtime_mode mode = hpx::runtime_mode_default);

Description

This is a simplified main, non-blocking entry point, which can be used to set up the runtime for an HPX application (the runtime system will be set up in console mode or worker mode depending on the command line settings). It will return immediately after that. Use hpx::wait and hpx::stop to synchronize with the runtime system's execution.

In console mode it will execute the user-supplied function hpx_main, in worker mode it will execute an empty hpx_main.

[Note]Note

If the parameter mode is not given (defaulted), the created runtime system instance will be executed in console or worker mode depending on the command line arguments passed in argc/argv. Otherwise it will be executed as specified by the parameter mode.

Parameters:

argc

[in] The number of command line arguments passed in argv. This is usually the unchanged value as passed by the operating system (to main()).

argv

[in] The command line arguments for this application, usually that is the value as passed by the operating system (to main()).

cfg

A list of configuration settings which will be added to the system configuration before the runtime instance is run. Each of the entries in this list must have the format of a fully defined key/value pair from an ini-file (for instance 'hpx.component.enabled=1')

desc_cmdline

[in] This parameter may hold the description of additional command line arguments understood by the application. These options will be prepended to the default command line options understood by hpx::init (see description below).

mode

[in] The mode the created runtime environment should be initialized in. There has to be exactly one locality in each HPX application which is executed in console mode (hpx::runtime_mode_console), all other localities have to be run in worker mode (hpx::runtime_mode_worker). Normally this is set up automatically, but sometimes it is necessary to explicitly specify the mode.

shutdown

[in] A function to be executed inside an HPX thread while hpx::finalize is executed. If this parameter is not given no function will be executed.

startup

[in] A function to be executed inside an HPX thread before f is called. If this parameter is not given no function will be executed.

Returns:

The function returns true if command line processing succeeded and the runtime system was started successfully. It will return false otherwise.
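A sketch of the hpx_main-based pattern this overload supports. The configuration entry reuses the example key/value pair given above; the sketch assumes an HPX build environment and is illustrative only.

```cpp
#include <hpx/hpx_start.hpp>
#include <hpx/hpx_finalize.hpp>

#include <boost/program_options.hpp>
#include <string>
#include <vector>

// With this overload the runtime executes a function named hpx_main
// on the console locality (an empty hpx_main in worker mode).
int hpx_main(boost::program_options::variables_map& vm)
{
    return hpx::finalize();
}

int main(int argc, char* argv[])
{
    boost::program_options::options_description desc("Allowed options");
    std::vector<std::string> const cfg = {
        "hpx.component.enabled=1"   // fully defined ini-style key/value pair
    };

    // Non-blocking: the runtime is launched and main() continues.
    hpx::start(desc, argc, argv, cfg);

    return hpx::stop();   // block until the runtime shuts down
}
```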


Function start

hpx::start — Main non-blocking entry point for launching the HPX runtime system.

Synopsis

// In header: <hpx/hpx_start.hpp>


bool start(int argc, char ** argv, std::vector< std::string > const & cfg, 
           hpx::runtime_mode mode = hpx::runtime_mode_default);

Description

This is a simplified main, non-blocking entry point, which can be used to set up the runtime for an HPX application (the runtime system will be set up in console mode or worker mode depending on the command line settings). It will return immediately after that. Use hpx::wait and hpx::stop to synchronize with the runtime system's execution.

In console mode it will execute the user-supplied function hpx_main, in worker mode it will execute an empty hpx_main.

[Note]Note

If the parameter mode is runtime_mode_default, the created runtime system instance will be executed in console or worker mode depending on the command line arguments passed in argc/argv. Otherwise it will be executed as specified by the parameter mode.

Parameters:

argc

[in] The number of command line arguments passed in argv. This is usually the unchanged value as passed by the operating system (to main()).

argv

[in] The command line arguments for this application, usually that is the value as passed by the operating system (to main()).

cfg

A list of configuration settings which will be added to the system configuration before the runtime instance is run. Each of the entries in this list must have the format of a fully defined key/value pair from an ini-file (for instance 'hpx.component.enabled=1')

mode

[in] The mode the created runtime environment should be initialized in. There has to be exactly one locality in each HPX application which is executed in console mode (hpx::runtime_mode_console), all other localities have to be run in worker mode (hpx::runtime_mode_worker). Normally this is set up automatically, but sometimes it is necessary to explicitly specify the mode.

Returns:

The function returns true if command line processing succeeded and the runtime system was started successfully. It will return false otherwise.


Function start

hpx::start — Main non-blocking entry point for launching the HPX runtime system.

Synopsis

// In header: <hpx/hpx_start.hpp>


bool start(boost::program_options::options_description const & desc_cmdline, 
           int argc, char ** argv, hpx::runtime_mode mode);

Description

This is a simplified main, non-blocking entry point, which can be used to set up the runtime for an HPX application (the runtime system will be set up in console mode or worker mode depending on the command line settings). It will return immediately after that. Use hpx::wait and hpx::stop to synchronize with the runtime system's execution.

In console mode it will execute the user-supplied function hpx_main, in worker mode it will execute an empty hpx_main.

[Note]Note

If the parameter mode is runtime_mode_default, the created runtime system instance will be executed in console or worker mode depending on the command line arguments passed in argc/argv. Otherwise it will be executed as specified by the parameter mode.

Parameters:

argc

[in] The number of command line arguments passed in argv. This is usually the unchanged value as passed by the operating system (to main()).

argv

[in] The command line arguments for this application, usually that is the value as passed by the operating system (to main()).

desc_cmdline

[in] This parameter may hold the description of additional command line arguments understood by the application. These options will be prepended to the default command line options understood by hpx::init (see description below).

mode

[in] The mode the created runtime environment should be initialized in. There has to be exactly one locality in each HPX application which is executed in console mode (hpx::runtime_mode_console), all other localities have to be run in worker mode (hpx::runtime_mode_worker). Normally this is set up automatically, but sometimes it is necessary to explicitly specify the mode.

Returns:

The function returns true if command line processing succeeded and the runtime system was started successfully. It will return false otherwise.


Function start

hpx::start — Main non-blocking entry point for launching the HPX runtime system.

Synopsis

// In header: <hpx/hpx_start.hpp>


bool start(std::string const & app_name, int argc = 0, char ** argv = 0, 
           hpx::runtime_mode mode = hpx::runtime_mode_default);

Description

This is a simplified main, non-blocking entry point, which can be used to set up the runtime for an HPX application (the runtime system will be set up in console mode or worker mode depending on the command line settings). It will return immediately after that. Use hpx::wait and hpx::stop to synchronize with the runtime system's execution.

[Note]Note

The created runtime system instance will be executed in console or worker mode depending on the command line arguments passed in argc/argv.

Parameters:

app_name

[in] The name of the application.

argc

[in] The number of command line arguments passed in argv. This is usually the unchanged value as passed by the operating system (to main()).

argv

[in] The command line arguments for this application, usually that is the value as passed by the operating system (to main()).

mode

[in] The mode the created runtime environment should be initialized in. There has to be exactly one locality in each HPX application which is executed in console mode (hpx::runtime_mode_console), all other localities have to be run in worker mode (hpx::runtime_mode_worker). Normally this is set up automatically, but sometimes it is necessary to explicitly specify the mode.

Returns:

The function returns true if command line processing succeeded and the runtime system was started successfully. It will return false otherwise.


Function start

hpx::start — Main non-blocking entry point for launching the HPX runtime system.

Synopsis

// In header: <hpx/hpx_start.hpp>


bool start(int argc = 0, char ** argv = 0, 
           hpx::runtime_mode mode = hpx::runtime_mode_default);

Description

This is a simplified main, non-blocking entry point, which can be used to set up the runtime for an HPX application (the runtime system will be set up in console mode or worker mode depending on the command line settings). It will return immediately after that. Use hpx::wait and hpx::stop to synchronize with the runtime system's execution.

[Note]Note

The created runtime system instance will be executed in console or worker mode depending on the command line arguments passed in argc/argv. If no command line arguments are passed, console mode is assumed.

If no command line arguments are passed the HPX runtime system will not support any of the default command line options as described in the section 'HPX Command Line Options'.

Parameters:

argc

[in] The number of command line arguments passed in argv. This is usually the unchanged value as passed by the operating system (to main()).

argv

[in] The command line arguments for this application, usually that is the value as passed by the operating system (to main()).

mode

[in] The mode the created runtime environment should be initialized in. There has to be exactly one locality in each HPX application which is executed in console mode (hpx::runtime_mode_console), all other localities have to be run in worker mode (hpx::runtime_mode_worker). Normally this is set up automatically, but sometimes it is necessary to explicitly specify the mode.

Returns:

The function returns true if command line processing succeeded and the runtime system was started successfully. It will return false otherwise.


Function start

hpx::start — Main non-blocking entry point for launching the HPX runtime system.

Synopsis

// In header: <hpx/hpx_start.hpp>


bool start(std::vector< std::string > const & cfg, 
           hpx::runtime_mode mode = hpx::runtime_mode_default);

Description

This is a simplified main, non-blocking entry point, which can be used to set up the runtime for an HPX application (the runtime system will be set up in console mode or worker mode depending on the command line settings). It will return immediately after that. Use hpx::wait and hpx::stop to synchronize with the runtime system's execution.

[Note]Note

The created runtime system instance will be executed in console or worker mode depending on the command line arguments passed in argc/argv. If no command line arguments are passed, console mode is assumed.

If no command line arguments are passed the HPX runtime system will not support any of the default command line options as described in the section 'HPX Command Line Options'.

Parameters:

cfg

A list of configuration settings which will be added to the system configuration before the runtime instance is run. Each of the entries in this list must have the format of a fully defined key/value pair from an ini-file (for instance 'hpx.component.enabled=1')

mode

[in] The mode the created runtime environment should be initialized in. There has to be exactly one locality in each HPX application which is executed in console mode (hpx::runtime_mode_console), all other localities have to be run in worker mode (hpx::runtime_mode_worker). Normally this is set up automatically, but sometimes it is necessary to explicitly specify the mode.

Returns:

The function returns true if command line processing succeeded and the runtime system was started successfully. It will return false otherwise.


Function start

hpx::start — Main non-blocking entry point for launching the HPX runtime system.

Synopsis

// In header: <hpx/hpx_start.hpp>


bool start(int (*f)(boost::program_options::variables_map &vm), 
           std::string const & app_name, int argc, char ** argv, 
           hpx::runtime_mode mode = hpx::runtime_mode_default);

Description

This is a simplified main, non-blocking entry point, which can be used to set up the runtime for an HPX application (the runtime system will be set up in console mode or worker mode depending on the command line settings). It will return immediately after that. Use hpx::wait and hpx::stop to synchronize with the runtime system's execution.

[Note]Note

The created runtime system instance will be executed in console or worker mode depending on the command line arguments passed in argc/argv.

Parameters:

app_name

[in] The name of the application.

argc

[in] The number of command line arguments passed in argv. This is usually the unchanged value as passed by the operating system (to main()).

argv

[in] The command line arguments for this application, usually that is the value as passed by the operating system (to main()).

f

[in] The function to be scheduled as an HPX thread. Usually this function represents the main entry point of any HPX application.

mode

[in] The mode the created runtime environment should be initialized in. There has to be exactly one locality in each HPX application which is executed in console mode (hpx::runtime_mode_console), all other localities have to be run in worker mode (hpx::runtime_mode_worker). Normally this is set up automatically, but sometimes it is necessary to explicitly specify the mode.

Returns:

The function returns true if command line processing succeeded and the runtime system was started successfully. It will return false otherwise.


Function start

hpx::start — Main non-blocking entry point for launching the HPX runtime system.

Synopsis

// In header: <hpx/hpx_start.hpp>


bool start(util::function_nonser< int(int, char **)> const &, 
           std::string const & app_name, int argc, char ** argv, 
           hpx::runtime_mode mode = hpx::runtime_mode_default);

Description

This is a simplified main, non-blocking entry point, which can be used to set up the runtime for an HPX application (the runtime system will be set up in console mode or worker mode depending on the command line settings). It will return immediately after that. Use hpx::wait and hpx::stop to synchronize with the runtime system's execution.

[Note]Note

The created runtime system instance will be executed in console or worker mode depending on the command line arguments passed in argc/argv.

Parameters:

app_name

[in] The name of the application.

argc

[in] The number of command line arguments passed in argv. This is usually the unchanged value as passed by the operating system (to main()).

argv

[in] The command line arguments for this application, usually that is the value as passed by the operating system (to main()).

mode

[in] The mode the created runtime environment should be initialized in. There has to be exactly one locality in each HPX application which is executed in console mode (hpx::runtime_mode_console), all other localities have to be run in worker mode (hpx::runtime_mode_worker). Normally this is set up automatically, but sometimes it is necessary to explicitly specify the mode.

Returns:

The function returns true if command line processing succeeded and the runtime system was started successfully. It will return false otherwise.


Function start

hpx::start — Main non-blocking entry point for launching the HPX runtime system.

Synopsis

// In header: <hpx/hpx_start.hpp>


bool start(int (*f)(boost::program_options::variables_map &vm), int argc, 
           char ** argv, hpx::runtime_mode mode = hpx::runtime_mode_default);

Description

This is a simplified main, non-blocking entry point, which can be used to set up the runtime for an HPX application (the runtime system will be set up in console mode or worker mode depending on the command line settings). It will return immediately after that. Use hpx::wait and hpx::stop to synchronize with the runtime system's execution.

[Note]Note

The created runtime system instance will be executed in console or worker mode depending on the command line arguments passed in argc/argv.

Parameters:

argc

[in] The number of command line arguments passed in argv. This is usually the unchanged value as passed by the operating system (to main()).

argv

[in] The command line arguments for this application, usually that is the value as passed by the operating system (to main()).

f

[in] The function to be scheduled as an HPX thread. Usually this function represents the main entry point of any HPX application.

mode

[in] The mode the created runtime environment should be initialized in. There has to be exactly one locality in each HPX application which is executed in console mode (hpx::runtime_mode_console), all other localities have to be run in worker mode (hpx::runtime_mode_worker). Normally this is set up automatically, but sometimes it is necessary to explicitly specify the mode.

Returns:

The function returns true if command line processing succeeded and the runtime system was started successfully. It will return false otherwise.


Function start

hpx::start — Main non-blocking entry point for launching the HPX runtime system.

Synopsis

// In header: <hpx/hpx_start.hpp>


bool start(util::function_nonser< int(int, char **)> const & f, int argc, 
           char ** argv, hpx::runtime_mode mode = hpx::runtime_mode_default);

Description

This is a simplified main, non-blocking entry point, which can be used to set up the runtime for an HPX application (the runtime system will be set up in console mode or worker mode depending on the command line settings). It will return immediately after that. Use hpx::wait and hpx::stop to synchronize with the runtime system's execution.

[Note]Note

The created runtime system instance will be executed in console or worker mode depending on the command line arguments passed in argc/argv.

Parameters:

argc

[in] The number of command line arguments passed in argv. This is usually the unchanged value as passed by the operating system (to main()).

argv

[in] The command line arguments for this application, usually that is the value as passed by the operating system (to main()).

f

[in] The function to be scheduled as an HPX thread. Usually this function represents the main entry point of any HPX application.

mode

[in] The mode the created runtime environment should be initialized in. There has to be exactly one locality in each HPX application which is executed in console mode (hpx::runtime_mode_console), all other localities have to be run in worker mode (hpx::runtime_mode_worker). Normally this is set up automatically, but sometimes it is necessary to explicitly specify the mode.

Returns:

The function returns true if command line processing succeeded and the runtime system was started successfully. It will return false otherwise.

namespace hpx {
  namespace lcos {
    template<typename Action, typename ArgN, ... > 
      hpx::future< std::vector< decltype(Action(hpx::id_type, ArgN,...))> > 
      broadcast(std::vector< hpx::id_type > const &, ArgN, ...);
    template<typename Action, typename ArgN, ... > 
      void broadcast_apply(std::vector< hpx::id_type > const &, ArgN, ...);
    template<typename Action, typename ArgN, ... > 
      hpx::future< std::vector< decltype(Action(hpx::id_type, ArgN,..., std::size_t))> > 
      broadcast_with_index(std::vector< hpx::id_type > const &, ArgN, ...);
    template<typename Action, typename ArgN, ... > 
      void broadcast_apply_with_index(std::vector< hpx::id_type > const &, 
                                      ArgN, ...);
  }
}

Function template broadcast

hpx::lcos::broadcast — Perform a distributed broadcast operation.

Synopsis

// In header: <hpx/lcos/broadcast.hpp>


template<typename Action, typename ArgN, ... > 
  hpx::future< std::vector< decltype(Action(hpx::id_type, ArgN,...))> > 
  broadcast(std::vector< hpx::id_type > const & ids, ArgN argN, ...);

Description

The function hpx::lcos::broadcast performs a distributed broadcast operation resulting in action invocations on a given set of global identifiers. The action can be either a plain action (in which case the global identifiers have to refer to localities) or a component action (in which case the global identifiers have to refer to instances of a component type which exposes the action).

The given action is invoked asynchronously on all given identifiers, and the arguments ArgN are passed along to those invocations.

[Note]Note

If decltype(Action(...)) is void, then the result of this function is future<void>.

Parameters:

argN

[in] Any number of arbitrary arguments (passed by const reference) which will be forwarded to the action invocation.

ids

[in] A list of global identifiers identifying the target objects for which the given action will be invoked.

Returns:

This function returns a future representing the result of the overall broadcast operation.
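A sketch of broadcasting a plain action to all localities. The action name is illustrative, and depending on the HPX version additional broadcast registration macros may be required; this assumes an HPX build environment.

```cpp
#include <hpx/hpx_init.hpp>
#include <hpx/include/actions.hpp>
#include <hpx/lcos/broadcast.hpp>

#include <vector>

// A plain action: the broadcast targets are localities.
int get_value() { return 42; }
HPX_PLAIN_ACTION(get_value, get_value_action);

int hpx_main()
{
    std::vector<hpx::id_type> localities = hpx::find_all_localities();

    // Invoke the action asynchronously on every locality; the future
    // becomes ready once all invocations have completed.
    hpx::future<std::vector<int> > result =
        hpx::lcos::broadcast<get_value_action>(localities);

    std::vector<int> values = result.get();   // one entry per locality
    return hpx::finalize();
}

int main(int argc, char* argv[])
{
    return hpx::init(argc, argv);
}
```

Had get_value returned void, the result type would collapse to hpx::future<void>, as the note above describes.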


Function template broadcast_apply

hpx::lcos::broadcast_apply — Perform an asynchronous (fire&forget) distributed broadcast operation.

Synopsis

// In header: <hpx/lcos/broadcast.hpp>


template<typename Action, typename ArgN, ... > 
  void broadcast_apply(std::vector< hpx::id_type > const & ids, ArgN argN, 
                       ...);

Description

The function hpx::lcos::broadcast_apply performs an asynchronous (fire&forget) distributed broadcast operation resulting in action invocations on a given set of global identifiers. The action can be either a plain action (in which case the global identifiers have to refer to localities) or a component action (in which case the global identifiers have to refer to instances of a component type which exposes the action).

The given action is invoked asynchronously on all given identifiers, and the arguments ArgN are passed along to those invocations.

Parameters:

argN

[in] Any number of arbitrary arguments (passed by const reference) which will be forwarded to the action invocation.

ids

[in] A list of global identifiers identifying the target objects for which the given action will be invoked.


Function template broadcast_with_index

hpx::lcos::broadcast_with_index — Perform a distributed broadcast operation.

Synopsis

// In header: <hpx/lcos/broadcast.hpp>


template<typename Action, typename ArgN, ... > 
  hpx::future< std::vector< decltype(Action(hpx::id_type, ArgN,..., std::size_t))> > 
  broadcast_with_index(std::vector< hpx::id_type > const & ids, ArgN argN, 
                       ...);

Description

The function hpx::lcos::broadcast_with_index performs a distributed broadcast operation resulting in action invocations on a given set of global identifiers. The action can be either a plain action (in which case the global identifiers have to refer to localities) or a component action (in which case the global identifiers have to refer to instances of a component type which exposes the action).

The given action is invoked asynchronously on all given identifiers, and the arguments ArgN are passed along to those invocations.

The function passes the index of the global identifier in the given list of identifiers as the last argument to the action.

[Note]Note

If decltype(Action(...)) is void, then the result of this function is future<void>.

Parameters:

argN

[in] Any number of arbitrary arguments (passed by const reference) which will be forwarded to the action invocation.

ids

[in] A list of global identifiers identifying the target objects for which the given action will be invoked.

Returns:

This function returns a future representing the result of the overall broadcast operation.


Function template broadcast_apply_with_index

hpx::lcos::broadcast_apply_with_index — Perform an asynchronous (fire&forget) distributed broadcast operation.

Synopsis

// In header: <hpx/lcos/broadcast.hpp>


template<typename Action, typename ArgN, ... > 
  void broadcast_apply_with_index(std::vector< hpx::id_type > const & ids, 
                                  ArgN argN, ...);

Description

The function hpx::lcos::broadcast_apply_with_index performs an asynchronous (fire&forget) distributed broadcast operation resulting in action invocations on a given set of global identifiers. The action can be either a plain action (in which case the global identifiers have to refer to localities) or a component action (in which case the global identifiers have to refer to instances of a component type which exposes the action).

The given action is invoked asynchronously on all given identifiers, and the arguments ArgN are passed along to those invocations.

The function passes the index of the global identifier in the given list of identifiers as the last argument to the action.

Parameters:

argN

[in] Any number of arbitrary arguments (passed by const reference) which will be forwarded to the action invocation.

ids

[in] A list of global identifiers identifying the target objects for which the given action will be invoked.

namespace hpx {
  namespace lcos {
    template<typename Action, typename FoldOp, typename Init, typename ArgN, 
             ... > 
      hpx::future< decltype(Action(hpx::id_type, ArgN,...))> 
      fold(std::vector< hpx::id_type > const &, FoldOp &&, Init &&, ArgN, ...);
    template<typename Action, typename FoldOp, typename Init, typename ArgN, 
             ... > 
      hpx::future< decltype(Action(hpx::id_type, ArgN,..., std::size_t))> 
      fold_with_index(std::vector< hpx::id_type > const &, FoldOp &&, Init &&, 
                      ArgN, ...);
    template<typename Action, typename FoldOp, typename Init, typename ArgN, 
             ... > 
      hpx::future< decltype(Action(hpx::id_type, ArgN,...))> 
      inverse_fold(std::vector< hpx::id_type > const &, FoldOp &&, Init &&, 
                   ArgN, ...);
    template<typename Action, typename FoldOp, typename Init, typename ArgN, 
             ... > 
      hpx::future< decltype(Action(hpx::id_type, ArgN,..., std::size_t))> 
      inverse_fold_with_index(std::vector< hpx::id_type > const &, FoldOp &&, 
                              Init &&, ArgN, ...);
  }
}

Function template fold

hpx::lcos::fold — Perform a distributed fold operation.

Synopsis

// In header: <hpx/lcos/fold.hpp>


template<typename Action, typename FoldOp, typename Init, typename ArgN, ... > 
  hpx::future< decltype(Action(hpx::id_type, ArgN,...))> 
  fold(std::vector< hpx::id_type > const & ids, FoldOp && fold_op, 
       Init && init, ArgN argN, ...);

Description

The function hpx::lcos::fold performs a distributed folding operation over results returned from action invocations on a given set of global identifiers. The action can be either a plain action (in which case the global identifiers have to refer to localities) or a component action (in which case the global identifiers have to refer to instances of a component type which exposes the action).

[Note]Note

The type of the initial value must be convertible to the result type returned from the invoked action.

Parameters:

argN

[in] Any number of arbitrary arguments (passed by value, by const reference or by rvalue reference) which will be forwarded to the action invocation.

fold_op

[in] A binary function expecting two results as returned from the action invocations. The function (or function object) is expected to return the result of the folding operation performed on its arguments.

ids

[in] A list of global identifiers identifying the target objects for which the given action will be invoked.

init

[in] The initial value to be used for the folding operation.

Returns:

This function returns a future representing the result of the overall folding operation.
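
As a minimal sketch (the action name is illustrative, and distributed builds may additionally need the fold action registration macros from <hpx/lcos/fold.hpp>), summing one value per locality could look like this:

```cpp
#include <hpx/hpx_main.hpp>
#include <hpx/include/lcos.hpp>
#include <hpx/lcos/fold.hpp>

#include <functional>
#include <iostream>
#include <vector>

// A plain action producing one partial value per locality.
int worker_threads()
{
    return static_cast<int>(hpx::get_os_thread_count());
}
HPX_PLAIN_ACTION(worker_threads, worker_threads_action);

int main()
{
    std::vector<hpx::id_type> localities = hpx::find_all_localities();

    // fold_op (std::plus) combines two partial results at a time;
    // init (0) is folded into the overall result.
    hpx::future<int> total = hpx::lcos::fold<worker_threads_action>(
        localities, std::plus<int>(), 0);

    std::cout << "total worker threads: " << total.get() << std::endl;
    return 0;
}
```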


Function template fold_with_index

hpx::lcos::fold_with_index — Perform a distributed folding operation.

Synopsis

// In header: <hpx/lcos/fold.hpp>


template<typename Action, typename FoldOp, typename Init, typename ArgN, ... > 
  hpx::future< decltype(Action(hpx::id_type, ArgN,..., std::size_t))> 
  fold_with_index(std::vector< hpx::id_type > const & ids, FoldOp && fold_op, 
                  Init && init, ArgN argN, ...);

Description

The function hpx::lcos::fold_with_index performs a distributed folding operation over results returned from action invocations on a given set of global identifiers. The action can be either a plain action (in which case the global identifiers have to refer to localities) or a component action (in which case the global identifiers have to refer to instances of a component type which exposes the action).

The function passes the index of the global identifier in the given list of identifiers as the last argument to the action.

[Note]Note

The type of the initial value must be convertible to the result type returned from the invoked action.

Parameters:

argN

[in] Any number of arbitrary arguments (passed by value, by const reference or by rvalue reference) which will be forwarded to the action invocation.

fold_op

[in] A binary function expecting two results as returned from the action invocations. The function (or function object) is expected to return the result of the folding operation performed on its arguments.

ids

[in] A list of global identifiers identifying the target objects for which the given action will be invoked.

init

[in] The initial value to be used for the folding operation.

Returns:

This function returns a future representing the result of the overall folding operation.


Function template inverse_fold

hpx::lcos::inverse_fold — Perform a distributed inverse folding operation.

Synopsis

// In header: <hpx/lcos/fold.hpp>


template<typename Action, typename FoldOp, typename Init, typename ArgN, ... > 
  hpx::future< decltype(Action(hpx::id_type, ArgN,...))> 
  inverse_fold(std::vector< hpx::id_type > const & ids, FoldOp && fold_op, 
               Init && init, ArgN argN, ...);

Description

The function hpx::lcos::inverse_fold performs an inverse distributed folding operation over results returned from action invocations on a given set of global identifiers. The action can be either a plain action (in which case the global identifiers have to refer to localities) or a component action (in which case the global identifiers have to refer to instances of a component type which exposes the action).

[Note]Note

The type of the initial value must be convertible to the result type returned from the invoked action.

Parameters:

argN

[in] Any number of arbitrary arguments (passed by value, by const reference or by rvalue reference) which will be forwarded to the action invocation.

fold_op

[in] A binary function expecting two results as returned from the action invocations. The function (or function object) is expected to return the result of the folding operation performed on its arguments.

ids

[in] A list of global identifiers identifying the target objects for which the given action will be invoked.

init

[in] The initial value to be used for the folding operation.

Returns:

This function returns a future representing the result of the overall folding operation.


Function template inverse_fold_with_index

hpx::lcos::inverse_fold_with_index — Perform a distributed inverse folding operation.

Synopsis

// In header: <hpx/lcos/fold.hpp>


template<typename Action, typename FoldOp, typename Init, typename ArgN, ... > 
  hpx::future< decltype(Action(hpx::id_type, ArgN,..., std::size_t))> 
  inverse_fold_with_index(std::vector< hpx::id_type > const & ids, 
                          FoldOp && fold_op, Init && init, ArgN argN, ...);

Description

The function hpx::lcos::inverse_fold_with_index performs an inverse distributed folding operation over results returned from action invocations on a given set of global identifiers. The action can be either a plain action (in which case the global identifiers have to refer to localities) or a component action (in which case the global identifiers have to refer to instances of a component type which exposes the action).

The function passes the index of the global identifier in the given list of identifiers as the last argument to the action.

[Note]Note

The type of the initial value must be convertible to the result type returned from the invoked action.

Parameters:

argN

[in] Any number of arbitrary arguments (passed by value, by const reference or by rvalue reference) which will be forwarded to the action invocation.

fold_op

[in] A binary function expecting two results as returned from the action invocations. The function (or function object) is expected to return the result of the folding operation performed on its arguments.

ids

[in] A list of global identifiers identifying the target objects for which the given action will be invoked.

init

[in] The initial value to be used for the folding operation.

Returns:

This function returns a future representing the result of the overall folding operation.

namespace hpx {
  namespace lcos {
    template<typename T> 
      hpx::future< std::vector< T > > 
      gather_here(char const *, hpx::future< T >, 
                  std::size_t = std::size_t(-1), 
                  std::size_t = std::size_t(-1), 
                  std::size_t = std::size_t(-1));
    template<typename T> 
      hpx::future< void > 
      gather_there(char const *, hpx::future< T >, 
                   std::size_t = std::size_t(-1), std::size_t = 0, 
                   std::size_t = std::size_t(-1));
    template<typename T> 
      hpx::future< std::vector< typename std::decay< T >::type > > 
      gather_here(char const *, T &&, std::size_t = std::size_t(-1), 
                  std::size_t = std::size_t(-1), 
                  std::size_t = std::size_t(-1));
    template<typename T> 
      hpx::future< void > 
      gather_there(char const *, T &&, std::size_t = std::size_t(-1), 
                   std::size_t = 0, std::size_t = std::size_t(-1));
  }
}

Function template gather_here

hpx::lcos::gather_here

Synopsis

// In header: <hpx/lcos/gather.hpp>


template<typename T> 
  hpx::future< std::vector< T > > 
  gather_here(char const * basename, hpx::future< T > result, 
              std::size_t num_sites = std::size_t(-1), 
              std::size_t generation = std::size_t(-1), 
              std::size_t this_site = std::size_t(-1));

Description

Gather a set of values from different call sites

This function receives a set of values from all call sites operating on the given base name.

Parameters:

basename

The base name identifying the gather operation

generation

The generational counter identifying the sequence number of the gather operation performed on the given base name. This is optional and needs to be supplied only if the gather operation on the given base name has to be performed more than once.

result

A future referring to the value to transmit to the central gather point from this call site.

this_site

The sequence number of this invocation (usually the locality id). This value is optional and defaults to whatever hpx::get_locality_id() returns.

Returns:

This function returns a future holding a vector with all gathered values. It will become ready once the gather operation has been completed.


Function template gather_there

hpx::lcos::gather_there

Synopsis

// In header: <hpx/lcos/gather.hpp>


template<typename T> 
  hpx::future< void > 
  gather_there(char const * basename, hpx::future< T > result, 
               std::size_t generation = std::size_t(-1), 
               std::size_t root_site = 0, 
               std::size_t this_site = std::size_t(-1));

Description

Gather a given value at the given call site

This function transmits the value given by result to a central gather site (where the corresponding gather_here is executed).

Parameters:

basename

The base name identifying the gather operation

generation

The generational counter identifying the sequence number of the gather operation performed on the given base name. This is optional and needs to be supplied only if the gather operation on the given base name has to be performed more than once.

result

A future referring to the value to transmit to the central gather point from this call site.

root_site

The sequence number of the central gather point (usually the locality id). This value is optional and defaults to 0.

this_site

The sequence number of this invocation (usually the locality id). This value is optional and defaults to whatever hpx::get_locality_id() returns.

Returns:

This function returns a future which will become ready once the gather operation has been completed.


Function template gather_here

hpx::lcos::gather_here

Synopsis

// In header: <hpx/lcos/gather.hpp>


template<typename T> 
  hpx::future< std::vector< typename std::decay< T >::type > > 
  gather_here(char const * basename, T && result, 
              std::size_t num_sites = std::size_t(-1), 
              std::size_t generation = std::size_t(-1), 
              std::size_t this_site = std::size_t(-1));

Description

Gather a set of values from different call sites

This function receives a set of values from all call sites operating on the given base name.

Parameters:

basename

The base name identifying the gather operation

generation

The generational counter identifying the sequence number of the gather operation performed on the given base name. This is optional and needs to be supplied only if the gather operation on the given base name has to be performed more than once.

result

The value to transmit to the central gather point from this call site.

this_site

The sequence number of this invocation (usually the locality id). This value is optional and defaults to whatever hpx::get_locality_id() returns.

Returns:

This function returns a future holding a vector with all gathered values. It will become ready once the gather operation has been completed.


Function template gather_there

hpx::lcos::gather_there

Synopsis

// In header: <hpx/lcos/gather.hpp>


template<typename T> 
  hpx::future< void > 
  gather_there(char const * basename, T && result, 
               std::size_t generation = std::size_t(-1), 
               std::size_t root_site = 0, 
               std::size_t this_site = std::size_t(-1));

Description

Gather a given value at the given call site

This function transmits the value given by result to a central gather site (where the corresponding gather_here is executed).

Parameters:

basename

The base name identifying the gather operation

generation

The generational counter identifying the sequence number of the gather operation performed on the given base name. This is optional and needs to be supplied only if the gather operation on the given base name has to be performed more than once.

result

The value to transmit to the central gather point from this call site.

root_site

The sequence number of the central gather point (usually the locality id). This value is optional and defaults to 0.

this_site

The sequence number of this invocation (usually the locality id). This value is optional and defaults to whatever hpx::get_locality_id() returns.

Returns:

This function returns a future which will become ready once the gather operation has been completed.
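
A typical pattern pairs gather_here on the root site with gather_there everywhere else. The following sketch collects every locality id at locality 0; the base name and registration name are illustrative, and HPX_REGISTER_GATHER is assumed to be the registration macro provided by <hpx/lcos/gather.hpp> for the transmitted value type.

```cpp
#include <hpx/hpx_main.hpp>
#include <hpx/include/lcos.hpp>
#include <hpx/lcos/gather.hpp>

#include <cstdint>
#include <iostream>
#include <utility>
#include <vector>

char const* gather_basename = "/example/gather/";

// Register the gather operation for the transmitted value type
// (assumed macro from <hpx/lcos/gather.hpp>).
HPX_REGISTER_GATHER(std::uint32_t, example_gather);

int main()
{
    std::uint32_t value = hpx::get_locality_id();

    if (value == 0)
    {
        // Root site: receives one value from each participating site.
        hpx::future<std::vector<std::uint32_t>> all =
            hpx::lcos::gather_here(gather_basename, std::move(value),
                hpx::get_num_localities_sync());

        for (std::uint32_t v : all.get())
            std::cout << "gathered locality id: " << v << std::endl;
    }
    else
    {
        // Every other site transmits its value to the root (site 0).
        hpx::lcos::gather_there(gather_basename, std::move(value)).get();
    }
    return 0;
}
```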

namespace hpx {
  template<typename InputIter> void wait_all(InputIter, InputIter);
  template<typename R> void wait_all(std::vector< future< R >> &&);
  template<typename... T> void wait_all(T &&...);
  template<typename InputIter> InputIter wait_all_n(InputIter, std::size_t);
}

Function template wait_all

hpx::wait_all

Synopsis

// In header: <hpx/lcos/wait_all.hpp>


template<typename InputIter> void wait_all(InputIter first, InputIter last);

Description

The function wait_all is an operator that allows joining on the results of all given futures. It AND-composes all given future objects and returns only after they have finished executing.

[Note]Note

The function wait_all returns after all futures have become ready. All input futures are still valid after wait_all returns.

Parameters:

first

The iterator pointing to the first element of a sequence of future or shared_future objects for which wait_all should wait.

last

The iterator pointing to the last element of a sequence of future or shared_future objects for which wait_all should wait.
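
The iterator overload is typically used with a container of futures; the following is a minimal sketch:

```cpp
#include <hpx/hpx_main.hpp>
#include <hpx/include/async.hpp>
#include <hpx/include/lcos.hpp>

#include <iostream>
#include <vector>

int square(int i) { return i * i; }

int main()
{
    std::vector<hpx::future<int>> futures;
    for (int i = 0; i != 10; ++i)
        futures.push_back(hpx::async(&square, i));

    // Blocks until every future in the sequence is ready. The futures
    // remain valid, so the results can still be retrieved afterwards.
    hpx::wait_all(futures.begin(), futures.end());

    int sum = 0;
    for (auto& f : futures)
        sum += f.get();    // does not block: all futures are ready

    std::cout << "sum of squares: " << sum << std::endl;
    return 0;
}
```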


Function template wait_all

hpx::wait_all

Synopsis

// In header: <hpx/lcos/wait_all.hpp>


template<typename R> void wait_all(std::vector< future< R >> && futures);

Description

The function wait_all is an operator that allows joining on the results of all given futures. It AND-composes all given future objects and returns only after they have finished executing.

[Note]Note

The function wait_all returns after all futures have become ready. All input futures are still valid after wait_all returns.

Parameters:

futures

A vector holding an arbitrary amount of future or shared_future objects for which wait_all should wait.


Function template wait_all

hpx::wait_all

Synopsis

// In header: <hpx/lcos/wait_all.hpp>


template<typename... T> void wait_all(T &&... futures);

Description

The function wait_all is an operator that allows joining on the results of all given futures. It AND-composes all given future objects and returns only after they have finished executing.

[Note]Note

The function wait_all returns after all futures have become ready. All input futures are still valid after wait_all returns.

Parameters:

futures

An arbitrary number of future or shared_future objects, possibly holding different types for which wait_all should wait.


Function template wait_all_n

hpx::wait_all_n

Synopsis

// In header: <hpx/lcos/wait_all.hpp>


template<typename InputIter> 
  InputIter wait_all_n(InputIter begin, std::size_t count);

Description

The function wait_all_n is an operator that allows joining on the results of all given futures. It AND-composes all given future objects and returns only after they have finished executing.

[Note]Note

The function wait_all_n returns after all futures have become ready. All input futures are still valid after wait_all_n returns.

Parameters:

begin

The iterator pointing to the first element of a sequence of future or shared_future objects for which wait_all_n should wait.

count

The number of elements in the sequence starting at begin.

Returns:

The function wait_all_n will return an iterator referring to the first element in the input sequence after the last processed element.

namespace hpx {
  template<typename InputIter> 
    void wait_any(InputIter, InputIter, error_code & = throws);
  template<typename R> 
    void wait_any(std::vector< future< R >> &, error_code & = throws);
  template<typename... T> void wait_any(error_code &, T &&...);
  template<typename... T> void wait_any(T &&...);
  template<typename InputIter> 
    InputIter wait_any_n(InputIter, std::size_t, error_code & = throws);
}

Function template wait_any

hpx::wait_any

Synopsis

// In header: <hpx/lcos/wait_any.hpp>


template<typename InputIter> 
  void wait_any(InputIter first, InputIter last, error_code & ec = throws);

Description

The function wait_any is a non-deterministic choice operator. It OR-composes all given future objects and returns after at least one of them has finished executing.

[Note]Note

The function wait_any returns after at least one future has become ready. All input futures are still valid after wait_any returns.

As long as ec is not pre-initialized to hpx::throws this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

None of the futures in the input sequence are invalidated.

Parameters:

ec

[in,out] this represents the error status on exit, if this is pre-initialized to hpx::throws the function will throw on error instead.

first

[in] The iterator pointing to the first element of a sequence of future or shared_future objects for which wait_any should wait.

last

[in] The iterator pointing to the last element of a sequence of future or shared_future objects for which wait_any should wait.


Function template wait_any

hpx::wait_any

Synopsis

// In header: <hpx/lcos/wait_any.hpp>


template<typename R> 
  void wait_any(std::vector< future< R >> & futures, error_code & ec = throws);

Description

The function wait_any is a non-deterministic choice operator. It OR-composes all given future objects and returns after at least one of them has finished executing.

[Note]Note

The function wait_any returns after at least one future has become ready. All input futures are still valid after wait_any returns.

As long as ec is not pre-initialized to hpx::throws this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

None of the futures in the input sequence are invalidated.

Parameters:

ec

[in,out] this represents the error status on exit, if this is pre-initialized to hpx::throws the function will throw on error instead.

futures

[in] A vector holding an arbitrary amount of future or shared_future objects for which wait_any should wait.
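
For example, this overload can race several interchangeable computations and act on whichever finishes first (the trivial lambdas below stand in for real work). Because no input future is invalidated, the ready one can be found afterwards via is_ready():

```cpp
#include <hpx/hpx_main.hpp>
#include <hpx/include/async.hpp>
#include <hpx/include/lcos.hpp>

#include <iostream>
#include <vector>

int main()
{
    // Two interchangeable computations; only the first result is needed.
    std::vector<hpx::future<int>> futures;
    futures.push_back(hpx::async([]() { return 1; }));
    futures.push_back(hpx::async([]() { return 2; }));

    // Returns once at least one future is ready; all futures stay valid.
    hpx::wait_any(futures);

    for (auto& f : futures)
    {
        if (f.is_ready())
        {
            std::cout << "first result: " << f.get() << std::endl;
            break;
        }
    }
    return 0;
}
```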


Function template wait_any

hpx::wait_any

Synopsis

// In header: <hpx/lcos/wait_any.hpp>


template<typename... T> void wait_any(error_code & ec, T &&... futures);

Description

The function wait_any is a non-deterministic choice operator. It OR-composes all given future objects and returns after at least one of them has finished executing.

[Note]Note

The function wait_any returns after at least one future has become ready. All input futures are still valid after wait_any returns.

As long as ec is not pre-initialized to hpx::throws this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

None of the futures in the input sequence are invalidated.

Parameters:

ec

[in,out] this represents the error status on exit, if this is pre-initialized to hpx::throws the function will throw on error instead.

futures

[in] An arbitrary number of future or shared_future objects, possibly holding different types for which wait_any should wait.


Function template wait_any

hpx::wait_any

Synopsis

// In header: <hpx/lcos/wait_any.hpp>


template<typename... T> void wait_any(T &&... futures);

Description

The function wait_any is a non-deterministic choice operator. It OR-composes all given future objects and returns after at least one of them has finished executing.

[Note]Note

The function wait_any returns after at least one future has become ready. All input futures are still valid after wait_any returns.

None of the futures in the input sequence are invalidated.

Parameters:

futures

[in] An arbitrary number of future or shared_future objects, possibly holding different types for which wait_any should wait.


Function template wait_any_n

hpx::wait_any_n

Synopsis

// In header: <hpx/lcos/wait_any.hpp>


template<typename InputIter> 
  InputIter wait_any_n(InputIter first, std::size_t count, 
                       error_code & ec = throws);

Description

The function wait_any_n is a non-deterministic choice operator. It OR-composes all given future objects and returns after at least one of them has finished executing.

[Note]Note

The function wait_any_n returns after at least one future has become ready. All input futures are still valid after wait_any_n returns.

[Note]Note

As long as ec is not pre-initialized to hpx::throws this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

None of the futures in the input sequence are invalidated.

Parameters:

count

[in] The number of elements in the sequence starting at first.

ec

[in,out] this represents the error status on exit, if this is pre-initialized to hpx::throws the function will throw on error instead.

first

[in] The iterator pointing to the first element of a sequence of future or shared_future objects for which wait_any_n should wait.

Returns:

The function wait_any_n will return an iterator referring to the first element in the input sequence after the last processed element.

namespace hpx {
  template<typename F, typename Future> 
    void wait_each(F &&, std::vector< Future > &&);
  template<typename F, typename Iterator> 
    void wait_each(F &&, Iterator, Iterator);
  template<typename F, typename... T> void wait_each(F &&, T &&...);
  template<typename F, typename Iterator> 
    void wait_each_n(F &&, Iterator, std::size_t);
}

Function template wait_each

hpx::wait_each

Synopsis

// In header: <hpx/lcos/wait_each.hpp>


template<typename F, typename Future> 
  void wait_each(F && f, std::vector< Future > && futures);

Description

The function wait_each is an operator that allows joining on the results of all given futures. It AND-composes all given future objects and returns only after they have finished executing. Additionally, the supplied function is called for each of the passed futures as soon as that future becomes ready. wait_each returns after all futures have become ready.

[Note]Note

This function consumes the futures as they are passed on to the supplied function.

Parameters:

f

The function which will be called for each of the input futures once the future has become ready.

futures

A vector holding an arbitrary amount of future or shared_future objects for which wait_each should wait.
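
The following sketch processes each result as soon as it arrives, in completion order (the lambdas stand in for real work). Note that the callback consumes the futures, so the vector is moved in:

```cpp
#include <hpx/hpx_main.hpp>
#include <hpx/include/async.hpp>
#include <hpx/include/lcos.hpp>

#include <iostream>
#include <utility>
#include <vector>

int main()
{
    std::vector<hpx::future<int>> futures;
    for (int i = 0; i != 4; ++i)
        futures.push_back(hpx::async([i]() { return i * i; }));

    // The callback receives (and thereby consumes) each future as soon
    // as it becomes ready; results arrive in completion order.
    hpx::wait_each(
        [](hpx::future<int> f) {
            std::cout << "got " << f.get() << std::endl;
        },
        std::move(futures));

    return 0;
}
```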


Function template wait_each

hpx::wait_each

Synopsis

// In header: <hpx/lcos/wait_each.hpp>


template<typename F, typename Iterator> 
  void wait_each(F && f, Iterator begin, Iterator end);

Description

The function wait_each is an operator that allows joining on the results of all given futures. It AND-composes all given future objects and returns only after they have finished executing. Additionally, the supplied function is called for each of the passed futures as soon as that future becomes ready. wait_each returns after all futures have become ready.

[Note]Note

This function consumes the futures as they are passed on to the supplied function.

Parameters:

begin

The iterator pointing to the first element of a sequence of future or shared_future objects for which wait_each should wait.

end

The iterator pointing to the last element of a sequence of future or shared_future objects for which wait_each should wait.

f

The function which will be called for each of the input futures once the future has become ready.


Function template wait_each

hpx::wait_each

Synopsis

// In header: <hpx/lcos/wait_each.hpp>


template<typename F, typename... T> void wait_each(F && f, T &&... futures);

Description

The function wait_each is an operator that allows joining on the results of all given futures. It AND-composes all given future objects and returns only after they have finished executing. Additionally, the supplied function is called for each of the passed futures as soon as that future becomes ready. wait_each returns after all futures have become ready.

[Note]Note

This function consumes the futures as they are passed on to the supplied function.

Parameters:

f

The function which will be called for each of the input futures once the future has become ready.

futures

An arbitrary number of future or shared_future objects, possibly holding different types for which wait_each should wait.


Function template wait_each_n

hpx::wait_each_n

Synopsis

// In header: <hpx/lcos/wait_each.hpp>


template<typename F, typename Iterator> 
  void wait_each_n(F && f, Iterator begin, std::size_t count);

Description

The function wait_each_n is an operator that allows joining on the results of all given futures. It AND-composes all given future objects and returns only after they have finished executing. Additionally, the supplied function is called for each of the passed futures as soon as that future becomes ready.

[Note]Note

This function consumes the futures as they are passed on to the supplied function.

Parameters:

begin

The iterator pointing to the first element of a sequence of future or shared_future objects for which wait_each_n should wait.

count

The number of elements in the sequence starting at begin.

f

The function which will be called for each of the input futures once the future has become ready.

namespace hpx {
  template<typename InputIter> 
    future< vector< future< typename std::iterator_traits< InputIter >::value_type > > > 
    wait_some(std::size_t, InputIter, InputIter, error_code & = throws);
  template<typename R> 
    void wait_some(std::size_t, std::vector< future< R >> &&, 
                   error_code & = throws);
  template<typename... T> 
    void wait_some(std::size_t, T &&..., error_code & = throws);
  template<typename InputIter> 
    InputIter wait_some_n(std::size_t, InputIter, std::size_t, 
                          error_code & = throws);
}

Function template wait_some

hpx::wait_some

Synopsis

// In header: <hpx/lcos/wait_some.hpp>


template<typename InputIter> 
  future< vector< future< typename std::iterator_traits< InputIter >::value_type > > > 
  wait_some(std::size_t n, InputIter first, InputIter last, 
            error_code & ec = throws);

Description

The function wait_some is an operator that allows joining on the results of all given futures. It AND-composes all given future objects and returns a new future object representing the same list of futures after at least n of them have finished executing.

[Note]Note

The future returned by the function wait_some becomes ready when at least n argument futures have become ready.

[Note]Note

Calling this version of wait_some where first == last returns a future with an empty vector that is immediately ready. Each future and shared_future is waited upon and then copied into the collection of the output (returned) future, maintaining the order of the futures in the input collection. The future returned by wait_some will not throw an exception, but the futures held in the output collection may.

Parameters:

ec

[in,out] this represents the error status on exit, if this is pre-initialized to hpx::throws the function will throw on error instead.

first

[in] The iterator pointing to the first element of a sequence of future or shared_future objects for which wait_some should wait.

last

[in] The iterator pointing to the last element of a sequence of future or shared_future objects for which wait_some should wait.

n

[in] The number of futures out of the arguments which have to become ready in order for the returned future to get ready.

Returns:

Returns a future holding the same list of futures as has been passed to wait_some.

  • future<vector<future<R>>>: If the input cardinality is unknown at compile time and the futures are all of the same type.


Function template wait_some

hpx::wait_some

Synopsis

// In header: <hpx/lcos/wait_some.hpp>


template<typename R> 
  void wait_some(std::size_t n, std::vector< future< R >> && futures, 
                 error_code & ec = throws);

Description

The function wait_some is an operator that allows joining on the results of all given futures. It AND-composes all given future objects and returns after at least n of them have finished executing.

[Note]Note

The function wait_some returns after n futures have become ready. All input futures are still valid after wait_some returns.

The function wait_some itself will not throw an exception, but any exceptions held by the input futures remain stored in them and may be thrown later when their results are retrieved.

Parameters:

ec

[in,out] This represents the error status on exit; if it is pre-initialized to hpx::throws the function will throw on error instead.

futures

[in] A vector holding an arbitrary amount of future or shared_future objects for which wait_some should wait.

n

[in] The number of futures out of the arguments which have to become ready in order for the returned future to get ready.


Function template wait_some

hpx::wait_some

Synopsis

// In header: <hpx/lcos/wait_some.hpp>


template<typename... T> 
  void wait_some(std::size_t n, T &&... futures, error_code & ec = throws);

Description

The function wait_some is an operator allowing one to join on the results of all given futures. It AND-composes all given future objects and returns after at least n of them have finished executing.

[Note]Note

The function wait_some returns after n futures have become ready. All input futures are still valid after wait_some returns.

Calling this version of wait_some with zero futures returns immediately. The function wait_some itself will not throw an exception, but any exceptions held by the input futures remain stored in them and may be thrown later when their results are retrieved.

Parameters:

ec

[in,out] This represents the error status on exit; if it is pre-initialized to hpx::throws the function will throw on error instead.

futures

[in] An arbitrary number of future or shared_future objects, possibly holding different types for which wait_some should wait.

n

[in] The number of futures out of the arguments which have to become ready in order for the returned future to get ready.


Function template wait_some_n

hpx::wait_some_n

Synopsis

// In header: <hpx/lcos/wait_some.hpp>


template<typename InputIter> 
  InputIter wait_some_n(std::size_t n, InputIter first, std::size_t count, 
                        error_code & ec = throws);

Description

The function wait_some_n is an operator allowing one to join on the results of all given futures. It AND-composes all given future objects and returns after at least n of them have finished executing.

[Note]Note

The function wait_some_n returns after n futures have become ready. All input futures are still valid after wait_some_n returns.

[Note]Note

Calling this version of wait_some_n where count == 0 returns immediately; possibly none of the input futures are ready at that point. The function wait_some_n itself will not throw an exception, but any exceptions held by the input futures remain stored in them and may be thrown later when their results are retrieved.

Parameters:

count

[in] The number of elements in the sequence starting at first.

ec

[in,out] This represents the error status on exit; if it is pre-initialized to hpx::throws the function will throw on error instead.

first

[in] The iterator pointing to the first element of a sequence of future or shared_future objects for which wait_some_n should wait.

n

[in] The number of futures out of the arguments which have to become ready in order for the returned future to get ready.

Returns:

This function returns an iterator referring to the first element after the last processed input element.

namespace hpx {
  template<typename InputIter, 
           typename Container = vector<future<typename std::iterator_traits<InputIter>::value_type>> > 
    future< Container > when_all(InputIter, InputIter);
  template<typename Range> future< Range > when_all(Range &&);
  template<typename... T> future< tuple< future< T >...> > when_all(T &&...);
  template<typename InputIter, 
           typename Container = vector<future<typename std::iterator_traits<InputIter>::value_type>> > 
    future< Container > when_all_n(InputIter, std::size_t);
}

Function template when_all

hpx::when_all

Synopsis

// In header: <hpx/lcos/when_all.hpp>


template<typename InputIter, 
         typename Container = vector<future<typename std::iterator_traits<InputIter>::value_type>> > 
  future< Container > when_all(InputIter first, InputIter last);

Description

The function when_all is an operator allowing one to join on the results of all given futures. It AND-composes all given future objects and returns a new future object representing the same list of futures after they have finished executing.

[Note]Note

Calling this version of when_all where first == last, returns a future with an empty container that is immediately ready. Each future and shared_future is waited upon and then copied into the collection of the output (returned) future, maintaining the order of the futures in the input collection. The future returned by when_all will not throw an exception, but the futures held in the output collection may.

Parameters:

first

[in] The iterator pointing to the first element of a sequence of future or shared_future objects for which when_all should wait.

last

[in] The iterator pointing to the last element of a sequence of future or shared_future objects for which when_all should wait.

Returns:

Returns a future holding the same list of futures as has been passed to when_all.

  • future<Container<future<R>>>: If the input cardinality is unknown at compile time and the futures are all of the same type. The order of the futures in the output container will be the same as given by the input iterator.


Function template when_all

hpx::when_all

Synopsis

// In header: <hpx/lcos/when_all.hpp>


template<typename Range> future< Range > when_all(Range && values);

Description

The function when_all is an operator allowing one to join on the results of all given futures. It AND-composes all given future objects and returns a new future object representing the same list of futures after they have finished executing.

[Note]Note

Calling this version of when_all where the input container is empty, returns a future with an empty container that is immediately ready. Each future and shared_future is waited upon and then copied into the collection of the output (returned) future, maintaining the order of the futures in the input collection. The future returned by when_all will not throw an exception, but the futures held in the output collection may.

Parameters:

values

[in] A range holding an arbitrary amount of future or shared_future objects for which when_all should wait.

Returns:

Returns a future holding the same list of futures as has been passed to when_all.

  • future<Container<future<R>>>: If the input cardinality is unknown at compile time and the futures are all of the same type.


Function template when_all

hpx::when_all

Synopsis

// In header: <hpx/lcos/when_all.hpp>


template<typename... T> 
  future< tuple< future< T >...> > when_all(T &&... futures);

Description

The function when_all is an operator allowing one to join on the results of all given futures. It AND-composes all given future objects and returns a new future object representing the same list of futures after they have finished executing.

[Note]Note

Each future and shared_future is waited upon and then copied into the collection of the output (returned) future, maintaining the order of the futures in the input collection. The future returned by when_all will not throw an exception, but the futures held in the output collection may.

Parameters:

futures

[in] An arbitrary number of future or shared_future objects, possibly holding different types for which when_all should wait.

Returns:

Returns a future holding the same list of futures as has been passed to when_all.

  • future<tuple<future<T0>, future<T1>, future<T2>...>>: If inputs are fixed in number and are of heterogeneous types. The inputs can be any arbitrary number of future objects.

  • future<tuple<>> if when_all is called with zero arguments. The returned future will be initially ready.


Function template when_all_n

hpx::when_all_n

Synopsis

// In header: <hpx/lcos/when_all.hpp>


template<typename InputIter, 
         typename Container = vector<future<typename std::iterator_traits<InputIter>::value_type>> > 
  future< Container > when_all_n(InputIter begin, std::size_t count);

Description

The function when_all_n is an operator allowing one to join on the results of all given futures. It AND-composes all given future objects and returns a new future object representing the same list of futures after they have finished executing.

[Note]Note

None of the futures in the input sequence are invalidated.

Parameters:

begin

[in] The iterator pointing to the first element of a sequence of future or shared_future objects for which when_all_n should wait.

count

[in] The number of elements in the sequence starting at begin.

Returns:

Returns a future holding the same list of futures as has been passed to when_all_n.

  • future<Container<future<R>>>: If the input cardinality is unknown at compile time and the futures are all of the same type. The order of the futures in the output container will be the same as given by the input iterator.

Throws:

This function will throw errors which are encountered while setting up the requested operation only. Errors encountered while executing the operations delivering the results to be stored in the futures are reported through the futures themselves.

namespace hpx {
  template<typename Sequence> struct when_any_result;
  template<typename InputIter, 
           typename Container = vector<future<typename std::iterator_traits<InputIter>::value_type>> > 
    future< when_any_result< Container > > when_any(InputIter, InputIter);
  template<typename Range> 
    future< when_any_result< Range > > when_any(Range &);
  template<typename... T> 
    future< when_any_result< tuple< future< T >...> > > when_any(T &&...);
  template<typename InputIter, 
           typename Container = vector<future<typename std::iterator_traits<InputIter>::value_type>> > 
    future< when_any_result< Container > > when_any_n(InputIter, std::size_t);
}

Struct template when_any_result

hpx::when_any_result

Synopsis

// In header: <hpx/lcos/when_any.hpp>

template<typename Sequence> 
struct when_any_result {

  // public data members
  std::size_t index;  // The index of a future which has become ready. 
  Sequence futures;  // The sequence of futures as passed to hpx::when_any. 
};

Description

Result type for when_any, contains a sequence of futures and an index pointing to a ready future.


Function template when_any

hpx::when_any

Synopsis

// In header: <hpx/lcos/when_any.hpp>


template<typename InputIter, 
         typename Container = vector<future<typename std::iterator_traits<InputIter>::value_type>> > 
  future< when_any_result< Container > > 
  when_any(InputIter first, InputIter last);

Description

The function when_any is a non-deterministic choice operator. It OR-composes all future objects given and returns a new future object representing the same list of futures after one future of that list finishes execution.

Parameters:

first

[in] The iterator pointing to the first element of a sequence of future or shared_future objects for which when_any should wait.

last

[in] The iterator pointing to the last element of a sequence of future or shared_future objects for which when_any should wait.

Returns:

Returns a when_any_result holding the same list of futures as has been passed to when_any and an index pointing to a ready future.

  • future<when_any_result<Container<future<R>>>>: If the input cardinality is unknown at compile time and the futures are all of the same type. The order of the futures in the output container will be the same as given by the input iterator.


Function template when_any

hpx::when_any

Synopsis

// In header: <hpx/lcos/when_any.hpp>


template<typename Range> 
  future< when_any_result< Range > > when_any(Range & values);

Description

The function when_any is a non-deterministic choice operator. It OR-composes all future objects given and returns a new future object representing the same list of futures after one future of that list finishes execution.

Parameters:

values

[in] A range holding an arbitrary number of future or shared_future objects for which when_any should wait.

Returns:

Returns a when_any_result holding the same list of futures as has been passed to when_any and an index pointing to a ready future.

  • future<when_any_result<Container<future<R>>>>: If the input cardinality is unknown at compile time and the futures are all of the same type. The order of the futures in the output container will be the same as given by the input iterator.


Function template when_any

hpx::when_any

Synopsis

// In header: <hpx/lcos/when_any.hpp>


template<typename... T> 
  future< when_any_result< tuple< future< T >...> > > 
  when_any(T &&... futures);

Description

The function when_any is a non-deterministic choice operator. It OR-composes all future objects given and returns a new future object representing the same list of futures after one future of that list finishes execution.

Parameters:

futures

[in] An arbitrary number of future or shared_future objects, possibly holding different types for which when_any should wait.

Returns:

Returns a when_any_result holding the same list of futures as has been passed to when_any and an index pointing to a ready future.

  • future<when_any_result<tuple<future<T0>, future<T1>...>>>: If inputs are fixed in number and are of heterogeneous types. The inputs can be any arbitrary number of future objects.

  • future<when_any_result<tuple<>>> if when_any is called with zero arguments. The returned future will be initially ready.


Function template when_any_n

hpx::when_any_n

Synopsis

// In header: <hpx/lcos/when_any.hpp>


template<typename InputIter, 
         typename Container = vector<future<typename std::iterator_traits<InputIter>::value_type>> > 
  future< when_any_result< Container > > 
  when_any_n(InputIter first, std::size_t count);

Description

The function when_any_n is a non-deterministic choice operator. It OR-composes all future objects given and returns a new future object representing the same list of futures after one future of that list finishes execution.

[Note]Note

None of the futures in the input sequence are invalidated.

Parameters:

count

[in] The number of elements in the sequence starting at first.

first

[in] The iterator pointing to the first element of a sequence of future or shared_future objects for which when_any_n should wait.

Returns:

Returns a when_any_result holding the same list of futures as has been passed to when_any and an index pointing to a ready future.

  • future<when_any_result<Container<future<R>>>>: If the input cardinality is unknown at compile time and the futures are all of the same type. The order of the futures in the output container will be the same as given by the input iterator.

namespace hpx {
  template<typename F, typename Future> 
    future< void > when_each(F &&, std::vector< Future > &&);
  template<typename F, typename Iterator> 
    future< Iterator > when_each(F &&, Iterator, Iterator);
  template<typename F, typename... Ts> 
    future< void > when_each(F &&, Ts &&...);
  template<typename F, typename Iterator> 
    future< Iterator > when_each_n(F &&, Iterator, std::size_t);
}

Function template when_each

hpx::when_each

Synopsis

// In header: <hpx/lcos/when_each.hpp>


template<typename F, typename Future> 
  future< void > when_each(F && f, std::vector< Future > && futures);

Description

The function when_each is an operator allowing one to join on the results of all given futures. It AND-composes all given future objects and returns a new future object representing the event of all those futures having finished executing. It also calls the supplied callback for each of the futures as it becomes ready.

[Note]Note

This function consumes the futures as they are passed on to the supplied function.

Parameters:

f

The function which will be called for each of the input futures once the future has become ready.

futures

A vector holding an arbitrary number of future or shared_future objects for which when_each should wait.

Returns:

Returns a future representing the event of all input futures being ready.


Function template when_each

hpx::when_each

Synopsis

// In header: <hpx/lcos/when_each.hpp>


template<typename F, typename Iterator> 
  future< Iterator > when_each(F && f, Iterator begin, Iterator end);

Description

The function when_each is an operator allowing one to join on the results of all given futures. It AND-composes all given future objects and returns a new future object representing the event of all those futures having finished executing. It also calls the supplied callback for each of the futures as it becomes ready.

[Note]Note

This function consumes the futures as they are passed on to the supplied function.

Parameters:

begin

The iterator pointing to the first element of a sequence of future or shared_future objects for which when_each should wait.

end

The iterator pointing to the last element of a sequence of future or shared_future objects for which when_each should wait.

f

The function which will be called for each of the input futures once the future has become ready.

Returns:

Returns a future representing the event of all input futures being ready.


Function template when_each

hpx::when_each

Synopsis

// In header: <hpx/lcos/when_each.hpp>


template<typename F, typename... Ts> 
  future< void > when_each(F && f, Ts &&... futures);

Description

The function when_each is an operator allowing one to join on the results of all given futures. It AND-composes all given future objects and returns a new future object representing the event of all those futures having finished executing. It also calls the supplied callback for each of the futures as it becomes ready.

[Note]Note

This function consumes the futures as they are passed on to the supplied function.

Parameters:

f

The function which will be called for each of the input futures once the future has become ready.

futures

An arbitrary number of future or shared_future objects, possibly holding different types, for which when_each should wait.

Returns:

Returns a future representing the event of all input futures being ready.


Function template when_each_n

hpx::when_each_n

Synopsis

// In header: <hpx/lcos/when_each.hpp>


template<typename F, typename Iterator> 
  future< Iterator > when_each_n(F && f, Iterator begin, std::size_t count);

Description

The function when_each_n is an operator allowing one to join on the results of all given futures. It AND-composes all given future objects and returns a new future object representing the event of all those futures having finished executing. It also calls the supplied callback for each of the futures as it becomes ready.

[Note]Note

This function consumes the futures as they are passed on to the supplied function.

Parameters:

begin

The iterator pointing to the first element of a sequence of future or shared_future objects for which when_each_n should wait.

count

The number of elements in the sequence starting at begin.

f

The function which will be called for each of the input futures once the future has become ready.

Returns:

Returns a future holding the iterator pointing to the first element after the last one.

namespace hpx {
  template<typename Sequence> struct when_some_result;
  template<typename InputIter, 
           typename Container = vector<future<typename std::iterator_traits<InputIter>::value_type>> > 
    future< when_some_result< Container > > 
    when_some(std::size_t, InputIter, InputIter, error_code & = throws);
  template<typename Range> 
    future< when_some_result< Range > > 
    when_some(std::size_t, Range &&, error_code & = throws);
  template<typename... T> 
    future< when_some_result< tuple< future< T >...> > > 
    when_some(std::size_t, error_code &, T &&...);
  template<typename... T> 
    future< when_some_result< tuple< future< T >...> > > 
    when_some(std::size_t, T &&...);
  template<typename InputIter, 
           typename Container = vector<future<typename std::iterator_traits<InputIter>::value_type>> > 
    future< when_some_result< Container > > 
    when_some_n(std::size_t, InputIter, std::size_t, error_code & = throws);
}

Struct template when_some_result

hpx::when_some_result

Synopsis

// In header: <hpx/lcos/when_some.hpp>

template<typename Sequence> 
struct when_some_result {

  // public data members
  std::vector< std::size_t > indices;  // List of indices of futures which became ready. 
  Sequence futures;  // The sequence of futures as passed to hpx::when_some. 
};

Description

Result type for when_some, contains a sequence of futures and indices pointing to ready futures.

when_some_result public data members

  1. std::vector< std::size_t > indices;

    List of indices of futures which became ready.


Function template when_some

hpx::when_some

Synopsis

// In header: <hpx/lcos/when_some.hpp>


template<typename InputIter, 
         typename Container = vector<future<typename std::iterator_traits<InputIter>::value_type>> > 
  future< when_some_result< Container > > 
  when_some(std::size_t n, InputIter first, InputIter last, 
            error_code & ec = throws);

Description

The function when_some is an operator allowing one to join on the results of all given futures. It AND-composes all given future objects and returns a new future object representing the same list of futures after n of them have finished executing.

[Note]Note

The future returned by the function when_some becomes ready when at least n argument futures have become ready.

[Note]Note

Calling this version of when_some where first == last, returns a future with an empty container that is immediately ready. Each future and shared_future is waited upon and then copied into the collection of the output (returned) future, maintaining the order of the futures in the input collection. The future returned by when_some will not throw an exception, but the futures held in the output collection may.

Parameters:

ec

[in,out] This represents the error status on exit; if it is pre-initialized to hpx::throws the function will throw on error instead.

first

[in] The iterator pointing to the first element of a sequence of future or shared_future objects for which when_some should wait.

last

[in] The iterator pointing to the last element of a sequence of future or shared_future objects for which when_some should wait.

n

[in] The number of futures out of the arguments which have to become ready in order for the returned future to get ready.

Returns:

Returns a when_some_result holding the same list of futures as has been passed to when_some and indices pointing to ready futures.

  • future<when_some_result<Container<future<R>>>>: If the input cardinality is unknown at compile time and the futures are all of the same type. The order of the futures in the output container will be the same as given by the input iterator.


Function template when_some

hpx::when_some

Synopsis

// In header: <hpx/lcos/when_some.hpp>


template<typename Range> 
  future< when_some_result< Range > > 
  when_some(std::size_t n, Range && futures, error_code & ec = throws);

Description

The function when_some is an operator allowing one to join on the results of all given futures. It AND-composes all given future objects and returns a new future object representing the same list of futures after n of them have finished executing.

[Note]Note

The future returned by the function when_some becomes ready when at least n argument futures have become ready.

[Note]Note

Each future and shared_future is waited upon and then copied into the collection of the output (returned) future, maintaining the order of the futures in the input collection. The future returned by when_some will not throw an exception, but the futures held in the output collection may.

Parameters:

ec

[in,out] This represents the error status on exit; if it is pre-initialized to hpx::throws the function will throw on error instead.

futures

[in] A container holding an arbitrary amount of future or shared_future objects for which when_some should wait.

n

[in] The number of futures out of the arguments which have to become ready in order for the returned future to get ready.

Returns:

Returns a when_some_result holding the same list of futures as has been passed to when_some and indices pointing to ready futures.

  • future<when_some_result<Container<future<R>>>>: If the input cardinality is unknown at compile time and the futures are all of the same type. The order of the futures in the output container will be the same as given by the input iterator.


Function template when_some

hpx::when_some

Synopsis

// In header: <hpx/lcos/when_some.hpp>


template<typename... T> 
  future< when_some_result< tuple< future< T >...> > > 
  when_some(std::size_t n, error_code & ec, T &&... futures);

Description

The function when_some is an operator allowing one to join on the results of all given futures. It AND-composes all given future objects and returns a new future object representing the same list of futures after n of them have finished executing.

[Note]Note

The future returned by the function when_some becomes ready when at least n argument futures have become ready.

[Note]Note

Each future and shared_future is waited upon and then copied into the collection of the output (returned) future, maintaining the order of the futures in the input collection. The future returned by when_some will not throw an exception, but the futures held in the output collection may.

Parameters:

ec

[in,out] This represents the error status on exit; if it is pre-initialized to hpx::throws the function will throw on error instead.

futures

[in] An arbitrary number of future or shared_future objects, possibly holding different types for which when_some should wait.

n

[in] The number of futures out of the arguments which have to become ready in order for the returned future to get ready.

Returns:

Returns a when_some_result holding the same list of futures as has been passed to when_some and indices pointing to ready futures.

  • future<when_some_result<tuple<future<T0>, future<T1>...>>>: If inputs are fixed in number and are of heterogeneous types. The inputs can be any arbitrary number of future objects.

  • future<when_some_result<tuple<>>> if when_some is called with zero arguments. The returned future will be initially ready.


Function template when_some

hpx::when_some

Synopsis

// In header: <hpx/lcos/when_some.hpp>


template<typename... T> 
  future< when_some_result< tuple< future< T >...> > > 
  when_some(std::size_t n, T &&... futures);

Description

The function when_some is an operator allowing one to join on the results of all given futures. It AND-composes all given future objects and returns a new future object representing the same list of futures after n of them have finished executing.

[Note]Note

The future returned by the function when_some becomes ready when at least n argument futures have become ready.

[Note]Note

Each future and shared_future is waited upon and then copied into the collection of the output (returned) future, maintaining the order of the futures in the input collection. The future returned by when_some will not throw an exception, but the futures held in the output collection may.

Parameters:

futures

[in] An arbitrary number of future or shared_future objects, possibly holding different types for which when_some should wait.

n

[in] The number of futures out of the arguments which have to become ready in order for the returned future to get ready.

Returns:

Returns a when_some_result holding the same list of futures as has been passed to when_some and indices pointing to ready futures.

  • future<when_some_result<tuple<future<T0>, future<T1>...>>>: If inputs are fixed in number and are of heterogeneous types. The inputs can be any arbitrary number of future objects.

  • future<when_some_result<tuple<>>> if when_some is called with zero arguments. The returned future will be initially ready.


Function template when_some_n

hpx::when_some_n

Synopsis

// In header: <hpx/lcos/when_some.hpp>


template<typename InputIter, 
         typename Container = vector<future<typename std::iterator_traits<InputIter>::value_type>> > 
  future< when_some_result< Container > > 
  when_some_n(std::size_t n, InputIter first, std::size_t count, 
              error_code & ec = throws);

Description

The function when_some_n is an operator allowing one to join on the results of all given futures. It AND-composes all given future objects and returns a new future object representing the same list of futures after n of them have finished executing.

[Note]Note

The future returned by the function when_some_n becomes ready when at least n argument futures have become ready.

[Note]Note

Calling this version of when_some_n where count == 0, returns a future with the same elements as the arguments that is immediately ready. Possibly none of the futures in that container are ready. Each future and shared_future is waited upon and then copied into the collection of the output (returned) future, maintaining the order of the futures in the input collection. The future returned by when_some_n will not throw an exception, but the futures held in the output collection may.

Parameters:

count

[in] The number of elements in the sequence starting at first.

ec

[in,out] This represents the error status on exit; if it is pre-initialized to hpx::throws the function will throw on error instead.

first

[in] The iterator pointing to the first element of a sequence of future or shared_future objects for which when_some_n should wait.

n

[in] The number of futures out of the arguments which have to become ready in order for the returned future to get ready.

Returns:

Returns a when_some_result holding the same list of futures as has been passed to when_some and indices pointing to ready futures.

  • future<when_some_result<Container<future<R>>>>: If the input cardinality is unknown at compile time and the futures are all of the same type. The order of the futures in the output container will be the same as given by the input iterator.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename InIter, typename OutIter> 
        unspecified adjacent_difference(ExPolicy &&, InIter, InIter, OutIter);
      template<typename ExPolicy, typename InIter, typename OutIter, 
               typename Op> 
        unspecified adjacent_difference(ExPolicy &&, InIter, InIter, OutIter, 
                                        Op &&);
    }
  }
}

Function template adjacent_difference

hpx::parallel::v1::adjacent_difference

Synopsis

// In header: <hpx/parallel/algorithms/adjacent_difference.hpp>


template<typename ExPolicy, typename InIter, typename OutIter> 
  unspecified adjacent_difference(ExPolicy && policy, InIter first, 
                                  InIter last, OutIter dest);

Description

Assigns to each value in the range beginning at dest the difference between the corresponding element in the range [first, last) and the element preceding it, except for *dest, which is assigned the value of *first.

[Note]Note

Complexity: Exactly (last - first) - 1 applications of the binary operator and (last - first) assignments.

The difference operations in the parallel adjacent_difference invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The difference operations in the parallel adjacent_difference invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

This overload of adjacent_difference uses operator- to compute the differences.

Parameters:

dest

Refers to the beginning of the sequence of elements the results will be assigned to.

first

Refers to the beginning of the sequence of elements of the range the algorithm will be applied to.

last

Refers to the end of the sequence of elements of the range the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter

The type of the source iterators used for the input range (deduced). This iterator type must meet the requirements of an input iterator.

OutIter

The type of the source iterators used for the output range (deduced). This iterator type must meet the requirements of an output iterator.

Returns:

The adjacent_difference algorithm returns a hpx::future<OutIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns OutIter otherwise. The adjacent_difference algorithm returns an iterator to the element past the last element written in the output range.


Function template adjacent_difference

hpx::parallel::v1::adjacent_difference

Synopsis

// In header: <hpx/parallel/algorithms/adjacent_difference.hpp>


template<typename ExPolicy, typename InIter, typename OutIter, typename Op> 
  unspecified adjacent_difference(ExPolicy && policy, InIter first, 
                                  InIter last, OutIter dest, Op && op);

Description

Assigns to each element in the range beginning at dest the difference between its corresponding element in the range [first, last) and the element preceding it, except for *dest, which is assigned *first.

[Note]Note

Complexity: Exactly (last - first) - 1 applications of the binary operator and (last - first) assignments.

The difference operations in the parallel adjacent_difference invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The difference operations in the parallel adjacent_difference invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest

Refers to the beginning of the sequence of elements the results will be assigned to.

first

Refers to the beginning of the sequence of elements of the range the algorithm will be applied to.

last

Refers to the end of the sequence of elements of the range the algorithm will be applied to.

op

The binary operator which returns the difference of elements. The signature should be equivalent to the following:

Ret op(const Type1 &a, const Type1 &b);


The signature does not need to have const &, but the function must not modify the objects passed to it. The type Type1 must be such that objects of type InIter can be dereferenced and then implicitly converted to Type1, and the return type Ret must be such that the result of op can be assigned through the dereferenced type of dest.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter

The type of the source iterators used for the input range (deduced). This iterator type must meet the requirements of an input iterator.

Op

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of adjacent_difference requires Op to meet the requirements of CopyConstructible.

OutIter

The type of the source iterators used for the output range (deduced). This iterator type must meet the requirements of an output iterator.

Returns:

The adjacent_difference algorithm returns a hpx::future<OutIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns OutIter otherwise. The adjacent_difference algorithm returns an iterator to the element past the last element written in the output range.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename FwdIter> 
        unspecified adjacent_find(ExPolicy &&, FwdIter, FwdIter);
      template<typename ExPolicy, typename FwdIter, typename Pred> 
        unspecified adjacent_find(ExPolicy &&, FwdIter, FwdIter, Pred &&);
    }
  }
}

Function template adjacent_find

hpx::parallel::v1::adjacent_find

Synopsis

// In header: <hpx/parallel/algorithms/adjacent_find.hpp>


template<typename ExPolicy, typename FwdIter> 
  unspecified adjacent_find(ExPolicy && policy, FwdIter first, FwdIter last);

Description

Searches the range [first, last) for two consecutive identical elements. This version uses operator== to compare the elements

[Note]Note

Complexity: Exactly the smaller of (result - first) + 1 and (last - first) - 1 applications of operator== where result is the return value

The comparison operations in the parallel adjacent_find algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparison operations in the parallel adjacent_find algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

first

Refers to the beginning of the sequence of elements of the range the algorithm will be applied to.

last

Refers to the end of the sequence of elements of the range the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

FwdIter

The type of the source iterators used for the range (deduced). This iterator type must meet the requirements of a forward iterator.

Returns:

The adjacent_find algorithm returns a hpx::future<FwdIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns FwdIter otherwise. The adjacent_find algorithm returns an iterator to the first of the identical elements. If no such elements are found, last is returned.


Function template adjacent_find

hpx::parallel::v1::adjacent_find

Synopsis

// In header: <hpx/parallel/algorithms/adjacent_find.hpp>


template<typename ExPolicy, typename FwdIter, typename Pred> 
  unspecified adjacent_find(ExPolicy && policy, FwdIter first, FwdIter last, 
                            Pred && op);

Description

Searches the range [first, last) for two consecutive identical elements. This version uses the given binary predicate op

[Note]Note

Complexity: Exactly the smaller of (result - first) + 1 and (last - first) - 1 applications of the predicate, where result is the value returned.

The comparison operations in the parallel adjacent_find invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparison operations in the parallel adjacent_find invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

This overload of adjacent_find is available if the user decides to provide their own binary predicate op.

Parameters:

first

Refers to the beginning of the sequence of elements of the range the algorithm will be applied to.

last

Refers to the end of the sequence of elements of the range the algorithm will be applied to.

op

The binary predicate which returns true if the elements should be treated as equal. The signature should be equivalent to the following:

bool pred(const Type1 &a, const Type1 &b);


The signature does not need to have const &, but the function must not modify the objects passed to it. The type Type1 must be such that objects of type FwdIter can be dereferenced and then implicitly converted to Type1.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

FwdIter

The type of the source iterators used for the range (deduced). This iterator type must meet the requirements of a forward iterator.

Pred

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of adjacent_find requires Pred to meet the requirements of CopyConstructible.

Returns:

The adjacent_find algorithm returns a hpx::future<FwdIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns FwdIter otherwise. The adjacent_find algorithm returns an iterator to the first of the identical elements. If no such elements are found, last is returned.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename InIter, typename F> 
        unspecified none_of(ExPolicy &&, InIter, InIter, F &&);
      template<typename ExPolicy, typename InIter, typename F> 
        unspecified any_of(ExPolicy &&, InIter, InIter, F &&);
      template<typename ExPolicy, typename InIter, typename F> 
        unspecified all_of(ExPolicy &&, InIter, InIter, F &&);
    }
  }
}

Function template none_of

hpx::parallel::v1::none_of

Synopsis

// In header: <hpx/parallel/algorithms/all_any_none.hpp>


template<typename ExPolicy, typename InIter, typename F> 
  unspecified none_of(ExPolicy && policy, InIter first, InIter last, F && f);

Description

Checks if unary predicate f returns true for no elements in the range [first, last).

[Note]Note

Complexity: At most last - first applications of the predicate f

The application of function objects in the parallel algorithm invoked with an execution policy object of type sequential_execution_policy executes in sequential order in the calling thread.

The application of function objects in the parallel algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy is permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

f

Specifies the function (or function object) which will be invoked for each of the elements in the sequence specified by [first, last). The signature of this predicate should be equivalent to:

bool pred(const Type &a);


The signature does not need to have const&, but the function must not modify the objects passed to it. The type Type must be such that an object of type InIter can be dereferenced and then implicitly converted to Type.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it applies user-provided function objects.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of none_of requires F to meet the requirements of CopyConstructible.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

Returns:

The none_of algorithm returns a hpx::future<bool> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns bool otherwise. The none_of algorithm returns true if the unary predicate f returns true for no elements in the range, false otherwise. It returns true if the range is empty.


Function template any_of

hpx::parallel::v1::any_of

Synopsis

// In header: <hpx/parallel/algorithms/all_any_none.hpp>


template<typename ExPolicy, typename InIter, typename F> 
  unspecified any_of(ExPolicy && policy, InIter first, InIter last, F && f);

Description

Checks if unary predicate f returns true for at least one element in the range [first, last).

[Note]Note

Complexity: At most last - first applications of the predicate f

The application of function objects in the parallel algorithm invoked with an execution policy object of type sequential_execution_policy executes in sequential order in the calling thread.

The application of function objects in the parallel algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy is permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

f

Specifies the function (or function object) which will be invoked for each of the elements in the sequence specified by [first, last). The signature of this predicate should be equivalent to:

bool pred(const Type &a);


The signature does not need to have const&, but the function must not modify the objects passed to it. The type Type must be such that an object of type InIter can be dereferenced and then implicitly converted to Type.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it applies user-provided function objects.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of any_of requires F to meet the requirements of CopyConstructible.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

Returns:

The any_of algorithm returns a hpx::future<bool> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns bool otherwise. The any_of algorithm returns true if the unary predicate f returns true for at least one element in the range, false otherwise. It returns false if the range is empty.


Function template all_of

hpx::parallel::v1::all_of

Synopsis

// In header: <hpx/parallel/algorithms/all_any_none.hpp>


template<typename ExPolicy, typename InIter, typename F> 
  unspecified all_of(ExPolicy && policy, InIter first, InIter last, F && f);

Description

Checks if unary predicate f returns true for all elements in the range [first, last).

[Note]Note

Complexity: At most last - first applications of the predicate f

The application of function objects in the parallel algorithm invoked with an execution policy object of type sequential_execution_policy executes in sequential order in the calling thread.

The application of function objects in the parallel algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy is permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

f

Specifies the function (or function object) which will be invoked for each of the elements in the sequence specified by [first, last). The signature of this predicate should be equivalent to:

bool pred(const Type &a);


The signature does not need to have const&, but the function must not modify the objects passed to it. The type Type must be such that an object of type InIter can be dereferenced and then implicitly converted to Type.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it applies user-provided function objects.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of all_of requires F to meet the requirements of CopyConstructible.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

Returns:

The all_of algorithm returns a hpx::future<bool> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns bool otherwise. The all_of algorithm returns true if the unary predicate f returns true for all elements in the range, false otherwise. It returns true if the range is empty.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename InIter, typename OutIter, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< InIter >::value &&traits::is_iterator< OutIter >::value) > 
        unspecified copy(ExPolicy &&, InIter, InIter, OutIter);
      template<typename ExPolicy, typename InIter, typename Size, 
               typename OutIter, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< InIter >::value &&traits::is_iterator< OutIter >::value) > 
        unspecified copy_n(ExPolicy &&, InIter, Size, OutIter);
      template<typename ExPolicy, typename InIter, typename OutIter, 
               typename F, typename Proj = util::projection_identity, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< InIter >::value &&traits::is_iterator< OutIter >::value &&traits::is_projected< Proj, InIter >::value &&traits::is_indirect_callable< F, traits::projected< Proj, InIter > >::value) > 
        unspecified copy_if(ExPolicy &&, InIter, InIter, OutIter, F &&, 
                            Proj && = Proj());
    }
  }
}

Function template copy

hpx::parallel::v1::copy

Synopsis

// In header: <hpx/parallel/algorithms/copy.hpp>


template<typename ExPolicy, typename InIter, typename OutIter, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< InIter >::value &&traits::is_iterator< OutIter >::value) > 
  unspecified copy(ExPolicy && policy, InIter first, InIter last, 
                   OutIter dest);

Description

Copies the elements in the range, defined by [first, last), to another range beginning at dest.

[Note]Note

Complexity: Performs exactly last - first assignments.

The assignments in the parallel copy algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel copy algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest

Refers to the beginning of the destination range.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Returns:

The copy algorithm returns a hpx::future<tagged_pair<tag::in(InIter), tag::out(OutIter)> > if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns tagged_pair<tag::in(InIter), tag::out(OutIter)> otherwise. The copy algorithm returns the pair of the input iterator last and the output iterator to the element in the destination range, one past the last element copied.


Function template copy_n

hpx::parallel::v1::copy_n

Synopsis

// In header: <hpx/parallel/algorithms/copy.hpp>


template<typename ExPolicy, typename InIter, typename Size, typename OutIter, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< InIter >::value &&traits::is_iterator< OutIter >::value) > 
  unspecified copy_n(ExPolicy && policy, InIter first, Size count, 
                     OutIter dest);

Description

Copies the elements in the range [first, first + count), starting from first and proceeding to first + count - 1, to another range beginning at dest.

[Note]Note

Complexity: Performs exactly count assignments, if count > 0, no assignments otherwise.

The assignments in the parallel copy_n algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel copy_n algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

count

Refers to the number of elements starting at first the algorithm will be applied to.

dest

Refers to the beginning of the destination range.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Size

The type of the argument specifying the number of elements to be copied.

Returns:

The copy_n algorithm returns a hpx::future<tagged_pair<tag::in(InIter), tag::out(OutIter)> > if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns tagged_pair<tag::in(InIter), tag::out(OutIter)> otherwise. The copy_n algorithm returns the pair of the input iterator forwarded to the first element after the last in the input sequence and the output iterator to the element in the destination range, one past the last element copied.


Function template copy_if

hpx::parallel::v1::copy_if

Synopsis

// In header: <hpx/parallel/algorithms/copy.hpp>


template<typename ExPolicy, typename InIter, typename OutIter, typename F, 
         typename Proj = util::projection_identity, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< InIter >::value &&traits::is_iterator< OutIter >::value &&traits::is_projected< Proj, InIter >::value &&traits::is_indirect_callable< F, traits::projected< Proj, InIter > >::value) > 
  unspecified copy_if(ExPolicy && policy, InIter first, InIter last, 
                      OutIter dest, F && f, Proj && proj = Proj());

Description

Copies the elements in the range, defined by [first, last), to another range beginning at dest. Copies only the elements for which the predicate f returns true. The order of the elements that are not removed is preserved.

[Note]Note

Complexity: Performs not more than last - first assignments, exactly last - first applications of the predicate f.

The assignments in the parallel copy_if algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel copy_if algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest

Refers to the beginning of the destination range.

f

Specifies the function (or function object) which will be invoked for each of the elements in the sequence specified by [first, last). This is a unary predicate which returns true for the required elements. The signature of this predicate should be equivalent to:

bool pred(const Type &a);


The signature does not need to have const&, but the function must not modify the objects passed to it. The type Type must be such that an object of type InIter can be dereferenced and then implicitly converted to Type.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

proj

Specifies the function (or function object) which will be invoked for each of the elements as a projection operation before the actual predicate is invoked.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of copy_if requires F to meet the requirements of CopyConstructible.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Proj

The type of an optional projection function. This defaults to util::projection_identity

Returns:

The copy_if algorithm returns a hpx::future<tagged_pair<tag::in(InIter), tag::out(OutIter)> > if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns tagged_pair<tag::in(InIter), tag::out(OutIter)> otherwise. The copy_if algorithm returns the pair of the input iterator forwarded to the first element after the last in the input sequence and the output iterator to the element in the destination range, one past the last element copied.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename Rng, typename OutIter, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_range< Rng >::value &&traits::is_iterator< OutIter >::value) > 
        unspecified copy(ExPolicy &&, Rng &&, OutIter);
      template<typename ExPolicy, typename Rng, typename OutIter, typename F, 
               typename Proj = util::projection_identity, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_range< Rng >::value &&traits::is_projected_range< Proj, Rng >::value &&traits::is_iterator< OutIter >::value &&traits::is_indirect_callable< F, traits::projected_range< Proj, Rng > >::value) > 
        unspecified copy_if(ExPolicy &&, Rng &&, OutIter, F &&, 
                            Proj && = Proj());
    }
  }
}

Function template copy

hpx::parallel::v1::copy

Synopsis

// In header: <hpx/parallel/container_algorithms/copy.hpp>


template<typename ExPolicy, typename Rng, typename OutIter, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_range< Rng >::value &&traits::is_iterator< OutIter >::value) > 
  unspecified copy(ExPolicy && policy, Rng && rng, OutIter dest);

Description

Copies the elements in the range rng to another range beginning at dest.

[Note]Note

Complexity: Performs exactly last - first assignments.

The assignments in the parallel copy algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel copy algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest

Refers to the beginning of the destination range.

policy

The execution policy to use for the scheduling of the iterations.

rng

Refers to the sequence of elements the algorithm will be applied to.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Rng

The type of the source range used (deduced). The iterators extracted from this range type must meet the requirements of an input iterator.

Returns:

The copy algorithm returns a hpx::future<OutIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns OutIter otherwise. The copy algorithm returns the output iterator to the element in the destination range, one past the last element copied.


Function template copy_if

hpx::parallel::v1::copy_if

Synopsis

// In header: <hpx/parallel/container_algorithms/copy.hpp>


template<typename ExPolicy, typename Rng, typename OutIter, typename F, 
         typename Proj = util::projection_identity, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_range< Rng >::value &&traits::is_projected_range< Proj, Rng >::value &&traits::is_iterator< OutIter >::value &&traits::is_indirect_callable< F, traits::projected_range< Proj, Rng > >::value) > 
  unspecified copy_if(ExPolicy && policy, Rng && rng, OutIter dest, F && f, 
                      Proj && proj = Proj());

Description

Copies the elements in the range rng to another range beginning at dest. Copies only the elements for which the predicate f returns true. The order of the elements that are not removed is preserved.

[Note]Note

Complexity: Performs not more than last - first assignments, exactly last - first applications of the predicate f.

The assignments in the parallel copy_if algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel copy_if algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest

Refers to the beginning of the destination range.

f

Specifies the function (or function object) which will be invoked for each of the elements in the sequence specified by rng. This is a unary predicate which returns true for the required elements. The signature of this predicate should be equivalent to:

bool pred(const Type &a);


The signature does not need to have const&, but the function must not modify the objects passed to it. The type Type must be such that objects of the iterator type extracted from Rng can be dereferenced and then implicitly converted to Type.

policy

The execution policy to use for the scheduling of the iterations.

proj

Specifies the function (or function object) which will be invoked for each of the elements as a projection operation before the actual predicate is invoked.

rng

Refers to the sequence of elements the algorithm will be applied to.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of copy_if requires F to meet the requirements of CopyConstructible.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Proj

The type of an optional projection function. This defaults to util::projection_identity

Rng

The type of the source range used (deduced). The iterators extracted from this range type must meet the requirements of an input iterator.

Returns:

The copy_if algorithm returns a hpx::future<OutIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns OutIter otherwise. The copy_if algorithm returns the output iterator to the element in the destination range, one past the last element copied.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename InIter, typename T> 
        unspecified count(ExPolicy &&, InIter, InIter, T const &);
      template<typename ExPolicy, typename InIter, typename F> 
        unspecified count_if(ExPolicy &&, InIter, InIter, F &&);
    }
  }
}

Function template count

hpx::parallel::v1::count

Synopsis

// In header: <hpx/parallel/algorithms/count.hpp>


template<typename ExPolicy, typename InIter, typename T> 
  unspecified count(ExPolicy && policy, InIter first, InIter last, 
                    T const & value);

Description

Returns the number of elements in the range [first, last) satisfying a specific criterion. This version counts the elements that are equal to the given value.

[Note]Note

Complexity: Performs exactly last - first comparisons.

The comparisons in the parallel count algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

[Note]Note

The comparisons in the parallel count algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

value

The value to search for.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the comparisons.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

T

The type of the value to search for (deduced).

Returns:

The count algorithm returns a hpx::future<difference_type> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns difference_type otherwise (where difference_type is defined by std::iterator_traits<InIter>::difference_type). The count algorithm returns the number of elements satisfying the given criteria.


Function template count_if

hpx::parallel::v1::count_if

Synopsis

// In header: <hpx/parallel/algorithms/count.hpp>


template<typename ExPolicy, typename InIter, typename F> 
  unspecified count_if(ExPolicy && policy, InIter first, InIter last, F && f);

Description

Returns the number of elements in the range [first, last) satisfying a specific criterion. This version counts elements for which predicate f returns true.

[Note]Note

Complexity: Performs exactly last - first applications of the predicate.

[Note]Note

The applications of the predicate in the parallel count_if algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The applications of the predicate in the parallel count_if algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

f

Specifies the function (or function object) which will be invoked for each of the elements in the sequence specified by [first, last). This is a unary predicate which returns true for the required elements. The signature of this predicate should be equivalent to:

bool pred(const Type &a);


The signature does not need to have const&, but the function must not modify the objects passed to it. The type Type must be such that an object of type InIter can be dereferenced and then implicitly converted to Type.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the comparisons.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of count_if requires F to meet the requirements of CopyConstructible.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

Returns:

The count_if algorithm returns hpx::future<difference_type> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns difference_type otherwise (where difference_type is defined by std::iterator_traits<InIter>::difference_type). The count_if algorithm returns the number of elements satisfying the given criteria.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename InIter1, typename InIter2> 
        unspecified equal(ExPolicy &&, InIter1, InIter1, InIter2, InIter2);
      template<typename ExPolicy, typename InIter1, typename InIter2, 
               typename F> 
        unspecified equal(ExPolicy &&, InIter1, InIter1, InIter2, InIter2, 
                          F &&);
      template<typename ExPolicy, typename InIter1, typename InIter2> 
        unspecified equal(ExPolicy &&, InIter1, InIter1, InIter2);
      template<typename ExPolicy, typename InIter1, typename InIter2, 
               typename F> 
        unspecified equal(ExPolicy &&, InIter1, InIter1, InIter2, F &&);
    }
  }
}

Function template equal

hpx::parallel::v1::equal

Synopsis

// In header: <hpx/parallel/algorithms/equal.hpp>


template<typename ExPolicy, typename InIter1, typename InIter2> 
  unspecified equal(ExPolicy && policy, InIter1 first1, InIter1 last1, 
                    InIter2 first2, InIter2 last2);

Description

Returns true if the range [first1, last1) is equal to the range [first2, last2), and false otherwise.

[Note]Note

Complexity: At most min(last1 - first1, last2 - first2) applications of the operator==().

The comparison operations in the parallel equal algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparison operations in the parallel equal algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

[Note]Note

The two ranges are considered equal if, for every iterator i in the range [first1,last1), *i equals *(first2 + (i - first1)). This overload of equal uses operator== to determine if two elements are equal.

Parameters:

first1

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

first2

Refers to the beginning of the sequence of elements of the second range the algorithm will be applied to.

last1

Refers to the end of the sequence of elements of the first range the algorithm will be applied to.

last2

Refers to the end of the sequence of elements of the second range the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter1

The type of the source iterators used for the first range (deduced). This iterator type must meet the requirements of an input iterator.

InIter2

The type of the source iterators used for the second range (deduced). This iterator type must meet the requirements of an input iterator.

Returns:

The equal algorithm returns a hpx::future<bool> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns bool otherwise. The equal algorithm returns true if the elements in the two ranges are equal, otherwise it returns false. If the length of the range [first1, last1) does not equal the length of the range [first2, last2), it returns false.


Function template equal

hpx::parallel::v1::equal

Synopsis

// In header: <hpx/parallel/algorithms/equal.hpp>


template<typename ExPolicy, typename InIter1, typename InIter2, typename F> 
  unspecified equal(ExPolicy && policy, InIter1 first1, InIter1 last1, 
                    InIter2 first2, InIter2 last2, F && f);

Description

Returns true if the range [first1, last1) is equal to the range [first2, last2), and false otherwise.

[Note]Note

Complexity: At most min(last1 - first1, last2 - first2) applications of the predicate f.

The comparison operations in the parallel equal algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparison operations in the parallel equal algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

[Note]Note

The two ranges are considered equal if, for every iterator i in the range [first1,last1), *i equals *(first2 + (i - first1)). This overload of equal uses operator== to determine if two elements are equal.

Parameters:

f

The binary predicate which returns true if the elements should be treated as equal. The signature of the predicate function should be equivalent to the following:

bool pred(const Type1 &a, const Type2 &b);


The signature does not need to have const &, but the function must not modify the objects passed to it. The types Type1 and Type2 must be such that objects of types InIter1 and InIter2 can be dereferenced and then implicitly converted to Type1 and Type2 respectively.

first1

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

first2

Refers to the beginning of the sequence of elements of the second range the algorithm will be applied to.

last1

Refers to the end of the sequence of elements of the first range the algorithm will be applied to.

last2

Refers to the end of the sequence of elements of the second range the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of equal requires F to meet the requirements of CopyConstructible.

InIter1

The type of the source iterators used for the first range (deduced). This iterator type must meet the requirements of an input iterator.

InIter2

The type of the source iterators used for the second range (deduced). This iterator type must meet the requirements of an input iterator.

Returns:

The equal algorithm returns a hpx::future<bool> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns bool otherwise. The equal algorithm returns true if the elements in the two ranges are equal, otherwise it returns false. If the length of the range [first1, last1) does not equal the length of the range [first2, last2), it returns false.


Function template equal

hpx::parallel::v1::equal

Synopsis

// In header: <hpx/parallel/algorithms/equal.hpp>


template<typename ExPolicy, typename InIter1, typename InIter2> 
  unspecified equal(ExPolicy && policy, InIter1 first1, InIter1 last1, 
                    InIter2 first2);

Description

Returns true if the range [first1, last1) is equal to the range starting at first2, and false otherwise.

[Note]Note

Complexity: At most last1 - first1 applications of the operator==().

The comparison operations in the parallel equal algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparison operations in the parallel equal algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

[Note]Note

The two ranges are considered equal if, for every iterator i in the range [first1,last1), *i equals *(first2 + (i - first1)). This overload of equal uses operator== to determine if two elements are equal.

Parameters:

first1

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

first2

Refers to the beginning of the sequence of elements of the second range the algorithm will be applied to.

last1

Refers to the end of the sequence of elements of the first range the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter1

The type of the source iterators used for the first range (deduced). This iterator type must meet the requirements of an input iterator.

InIter2

The type of the source iterators used for the second range (deduced). This iterator type must meet the requirements of an input iterator.

Returns:

The equal algorithm returns a hpx::future<bool> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns bool otherwise. The equal algorithm returns true if the elements in the two ranges are equal, otherwise it returns false.


Function template equal

hpx::parallel::v1::equal

Synopsis

// In header: <hpx/parallel/algorithms/equal.hpp>


template<typename ExPolicy, typename InIter1, typename InIter2, typename F> 
  unspecified equal(ExPolicy && policy, InIter1 first1, InIter1 last1, 
                    InIter2 first2, F && f);

Description

Returns true if the range [first1, last1) is equal to the range starting at first2, and false otherwise.

[Note]Note

Complexity: At most last1 - first1 applications of the predicate f.

The comparison operations in the parallel equal algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparison operations in the parallel equal algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

[Note]Note

The two ranges are considered equal if, for every iterator i in the range [first1,last1), *i equals *(first2 + (i - first1)). This overload of equal uses operator== to determine if two elements are equal.

Parameters:

f

The binary predicate which returns true if the elements should be treated as equal. The signature of the predicate function should be equivalent to the following:

bool pred(const Type1 &a, const Type2 &b);


The signature does not need to have const &, but the function must not modify the objects passed to it. The types Type1 and Type2 must be such that objects of types InIter1 and InIter2 can be dereferenced and then implicitly converted to Type1 and Type2 respectively.

first1

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

first2

Refers to the beginning of the sequence of elements of the second range the algorithm will be applied to.

last1

Refers to the end of the sequence of elements of the first range the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of equal requires F to meet the requirements of CopyConstructible.

InIter1

The type of the source iterators used for the first range (deduced). This iterator type must meet the requirements of an input iterator.

InIter2

The type of the source iterators used for the second range (deduced). This iterator type must meet the requirements of an input iterator.

Returns:

The equal algorithm returns a hpx::future<bool> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns bool otherwise. The equal algorithm returns true if the elements in the two ranges are equal, otherwise it returns false.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename InIter, typename OutIter, 
               typename T, typename Op> 
        unspecified exclusive_scan(ExPolicy &&, InIter, InIter, OutIter, T, 
                                   Op &&);
      template<typename ExPolicy, typename InIter, typename OutIter, 
               typename T> 
        unspecified exclusive_scan(ExPolicy &&, InIter, InIter, OutIter, T);
    }
  }
}

Function template exclusive_scan

hpx::parallel::v1::exclusive_scan

Synopsis

// In header: <hpx/parallel/algorithms/exclusive_scan.hpp>


template<typename ExPolicy, typename InIter, typename OutIter, typename T, 
         typename Op> 
  unspecified exclusive_scan(ExPolicy && policy, InIter first, InIter last, 
                             OutIter dest, T init, Op && op);

Description

Assigns through each iterator i in [result, result + (last - first)) the value of GENERALIZED_NONCOMMUTATIVE_SUM(binary_op, init, *first, ..., *(first + (i - result) - 1)).

[Note]Note

Complexity: O(last - first) applications of the predicate op.

The reduce operations in the parallel exclusive_scan algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The reduce operations in the parallel exclusive_scan algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

[Note]Note

GENERALIZED_NONCOMMUTATIVE_SUM(op, a1, ..., aN) is defined as:

  • a1 when N is 1

  • op(GENERALIZED_NONCOMMUTATIVE_SUM(op, a1, ..., aK), GENERALIZED_NONCOMMUTATIVE_SUM(op, aM, ..., aN)) where 1 < K+1 = M <= N.

The difference between exclusive_scan and inclusive_scan is that inclusive_scan includes the ith input element in the ith sum. If op is not mathematically associative, the behavior of exclusive_scan may be non-deterministic.

Parameters:

dest

Refers to the beginning of the destination range.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

init

The initial value for the generalized sum.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

op

Specifies the function (or function object) which will be invoked for each of the values of the input sequence. This is a binary predicate. The signature of this predicate should be equivalent to:

Ret fun(const Type1 &a, const Type1 &b);


The signature does not need to have const&, but the function must not modify the objects passed to it. The types Type1 and Ret must be such that an object of a type as given by the input sequence can be implicitly converted to any of those types.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

Op

The type of the binary function object used for the reduction operation.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

T

The type of the value to be used as initial (and intermediate) values (deduced).

Returns:

The exclusive_scan algorithm returns a hpx::future<OutIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns OutIter otherwise. The exclusive_scan algorithm returns the output iterator to the element in the destination range, one past the last element copied.


Function template exclusive_scan

hpx::parallel::v1::exclusive_scan

Synopsis

// In header: <hpx/parallel/algorithms/exclusive_scan.hpp>


template<typename ExPolicy, typename InIter, typename OutIter, typename T> 
  unspecified exclusive_scan(ExPolicy && policy, InIter first, InIter last, 
                             OutIter dest, T init);

Description

Assigns through each iterator i in [result, result + (last - first)) the value of GENERALIZED_NONCOMMUTATIVE_SUM(+, init, *first, ..., *(first + (i - result) - 1))

[Note]Note

Complexity: O(last - first) applications of the predicate std::plus<T>.

The reduce operations in the parallel exclusive_scan algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The reduce operations in the parallel exclusive_scan algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

[Note]Note

GENERALIZED_NONCOMMUTATIVE_SUM(+, a1, ..., aN) is defined as:

  • a1 when N is 1

  • GENERALIZED_NONCOMMUTATIVE_SUM(+, a1, ..., aK) + GENERALIZED_NONCOMMUTATIVE_SUM(+, aM, ..., aN) where 1 < K+1 = M <= N.

The difference between exclusive_scan and inclusive_scan is that inclusive_scan includes the ith input element in the ith sum.

Parameters:

dest

Refers to the beginning of the destination range.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

init

The initial value for the generalized sum.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

T

The type of the value to be used as initial (and intermediate) values (deduced).

Returns:

The exclusive_scan algorithm returns a hpx::future<OutIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns OutIter otherwise. The exclusive_scan algorithm returns the output iterator to the element in the destination range, one past the last element copied.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename InIter, typename T> 
        unspecified fill(ExPolicy &&, InIter, InIter, T);
      template<typename ExPolicy, typename OutIter, typename Size, typename T> 
        unspecified fill_n(ExPolicy &&, OutIter, Size, T);
    }
  }
}

Function template fill

hpx::parallel::v1::fill

Synopsis

// In header: <hpx/parallel/algorithms/fill.hpp>


template<typename ExPolicy, typename InIter, typename T> 
  unspecified fill(ExPolicy && policy, InIter first, InIter last, T value);

Description

Assigns the given value to the elements in the range [first, last).

[Note]Note

Complexity: Performs exactly last - first assignments.

The comparisons in the parallel fill algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparisons in the parallel fill algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

value

The value to be assigned.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

T

The type of the value to be assigned (deduced).

Returns:

The fill algorithm returns a hpx::future<void> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns void otherwise.


Function template fill_n

hpx::parallel::v1::fill_n

Synopsis

// In header: <hpx/parallel/algorithms/fill.hpp>


template<typename ExPolicy, typename OutIter, typename Size, typename T> 
  unspecified fill_n(ExPolicy && policy, OutIter first, Size count, T value);

Description

Assigns the given value value to the first count elements in the range beginning at first if count > 0. Does nothing otherwise.

[Note]Note

Complexity: Performs exactly count assignments, for count > 0.

The comparisons in the parallel fill_n algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparisons in the parallel fill_n algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

count

Refers to the number of elements starting at first the algorithm will be applied to.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

value

The value to be assigned.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

OutIter

The type of the destination iterators used (deduced). This iterator type must meet the requirements of an output iterator.

Size

The type of the argument specifying the number of elements to assign the value to.

T

The type of the value to be assigned (deduced).

Returns:

The fill_n algorithm returns a hpx::future<void> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns void otherwise.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename InIter, typename T> 
        unspecified find(ExPolicy &&, InIter, InIter, T const &);
      template<typename ExPolicy, typename InIter, typename F> 
        unspecified find_if(ExPolicy &&, InIter, InIter, F &&);
      template<typename ExPolicy, typename InIter, typename F> 
        unspecified find_if_not(ExPolicy &&, InIter, InIter, F &&);
      template<typename ExPolicy, typename FwdIter1, typename FwdIter2> 
        unspecified find_end(ExPolicy &&, FwdIter1, FwdIter1, FwdIter2, 
                             FwdIter2);
      template<typename ExPolicy, typename FwdIter1, typename FwdIter2, 
               typename F> 
        unspecified find_end(ExPolicy &&, FwdIter1, FwdIter1, FwdIter2, 
                             FwdIter2, F &&);
      template<typename ExPolicy, typename InIter, typename FwdIter> 
        unspecified find_first_of(ExPolicy &&, InIter, InIter, FwdIter, 
                                  FwdIter);
      template<typename ExPolicy, typename InIter, typename FwdIter, 
               typename Pred> 
        unspecified find_first_of(ExPolicy &&, InIter, InIter, FwdIter, 
                                  FwdIter, Pred &&);
    }
  }
}

Function template find

hpx::parallel::v1::find

Synopsis

// In header: <hpx/parallel/algorithms/find.hpp>


template<typename ExPolicy, typename InIter, typename T> 
  unspecified find(ExPolicy && policy, InIter first, InIter last, 
                   T const & val);

Description

Returns the first element in the range [first, last) that is equal to val.

[Note]Note

Complexity: At most last - first applications of the operator==().

The comparison operations in the parallel find algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparison operations in the parallel find algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

first

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

last

Refers to the end of the sequence of elements of the first range the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

val

The value to compare the elements to.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter

The type of the source iterators used for the first range (deduced). This iterator type must meet the requirements of an input iterator.

T

The type of the value to find (deduced).

Returns:

The find algorithm returns a hpx::future<InIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns InIter otherwise. The find algorithm returns the first element in the range [first, last) that is equal to val. If no such element in the range [first, last) is equal to val, then the algorithm returns last.


Function template find_if

hpx::parallel::v1::find_if

Synopsis

// In header: <hpx/parallel/algorithms/find.hpp>


template<typename ExPolicy, typename InIter, typename F> 
  unspecified find_if(ExPolicy && policy, InIter first, InIter last, F && f);

Description

Returns the first element in the range [first, last) for which the predicate f returns true.

[Note]Note

Complexity: At most last - first applications of the predicate.

The comparison operations in the parallel find_if algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparison operations in the parallel find_if algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

f

The unary predicate which returns true for the required element. The signature of the predicate should be equivalent to:

bool pred(const Type &a);


The signature does not need to have const &, but the function must not modify the objects passed to it. The type Type must be such that objects of type InIter can be dereferenced and then implicitly converted to Type.

first

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

last

Refers to the end of the sequence of elements of the first range the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of find_if requires F to meet the requirements of CopyConstructible.

InIter

The type of the source iterators used for the first range (deduced). This iterator type must meet the requirements of an input iterator.

Returns:

The find_if algorithm returns a hpx::future<InIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns InIter otherwise. The find_if algorithm returns the first element in the range [first,last) that satisfies the predicate f. If no such element exists that satisfies the predicate f, the algorithm returns last.


Function template find_if_not

hpx::parallel::v1::find_if_not

Synopsis

// In header: <hpx/parallel/algorithms/find.hpp>


template<typename ExPolicy, typename InIter, typename F> 
  unspecified find_if_not(ExPolicy && policy, InIter first, InIter last, 
                          F && f);

Description

Returns the first element in the range [first, last) for which the predicate f returns false.

[Note]Note

Complexity: At most last - first applications of the predicate.

The comparison operations in the parallel find_if_not algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparison operations in the parallel find_if_not algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

f

The unary predicate which returns false for the required element. The signature of the predicate should be equivalent to:

bool pred(const Type &a);


The signature does not need to have const &, but the function must not modify the objects passed to it. The type Type must be such that objects of type InIter can be dereferenced and then implicitly converted to Type.

first

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

last

Refers to the end of the sequence of elements of the first range the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of find_if_not requires F to meet the requirements of CopyConstructible.

InIter

The type of the source iterators used for the first range (deduced). This iterator type must meet the requirements of an input iterator.

Returns:

The find_if_not algorithm returns a hpx::future<InIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns InIter otherwise. The find_if_not algorithm returns the first element in the range [first, last) that does not satisfy the predicate f. If no such element exists that does not satisfy the predicate f, the algorithm returns last.


Function template find_end

hpx::parallel::v1::find_end

Synopsis

// In header: <hpx/parallel/algorithms/find.hpp>


template<typename ExPolicy, typename FwdIter1, typename FwdIter2> 
  unspecified find_end(ExPolicy && policy, FwdIter1 first1, FwdIter1 last1, 
                       FwdIter2 first2, FwdIter2 last2);

Description

Returns the last subsequence of elements [first2, last2) found in the range [first1, last1), using operator== to compare elements.

[Note]Note

Complexity: at most S*(N-S+1) comparisons where S = distance(first2, last2) and N = distance(first1, last1).

The comparison operations in the parallel find_end algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparison operations in the parallel find_end algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

first1

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

first2

Refers to the beginning of the sequence of elements the algorithm will be searching for.

last1

Refers to the end of the sequence of elements of the first range the algorithm will be applied to.

last2

Refers to the end of the sequence of elements the algorithm will be searching for.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

FwdIter1

The type of the source iterators used for the first range (deduced). This iterator type must meet the requirements of a forward iterator.

FwdIter2

The type of the source iterators used for the second range (deduced). This iterator type must meet the requirements of a forward iterator.

Returns:

The find_end algorithm returns a hpx::future<FwdIter1> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns FwdIter1 otherwise. The find_end algorithm returns an iterator to the beginning of the last occurrence of the subsequence [first2, last2) in the range [first1, last1). If the subsequence [first2, last2) is longer than the range [first1, last1), if it is empty, or if no occurrence is found, last1 is returned.


Function template find_end

hpx::parallel::v1::find_end

Synopsis

// In header: <hpx/parallel/algorithms/find.hpp>


template<typename ExPolicy, typename FwdIter1, typename FwdIter2, typename F> 
  unspecified find_end(ExPolicy && policy, FwdIter1 first1, FwdIter1 last1, 
                       FwdIter2 first2, FwdIter2 last2, F && f);

Description

Returns the last subsequence of elements [first2, last2) found in the range [first1, last1), using the given predicate f to compare elements.

[Note]Note

Complexity: at most S*(N-S+1) comparisons where S = distance(first2, last2) and N = distance(first1, last1).

The comparison operations in the parallel find_end algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparison operations in the parallel find_end algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

This overload of find_end is available if the user decides to provide the algorithm their own predicate f.

Parameters:

f

The binary predicate which returns true if the elements should be treated as equal. The signature should be equivalent to the following:

bool pred(const Type1 &a, const Type2 &b);


The signature does not need to have const &, but the function must not modify the objects passed to it. The types Type1 and Type2 must be such that objects of types FwdIter1 and FwdIter2 can be dereferenced and then implicitly converted to Type1 and Type2 respectively.

first1

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

first2

Refers to the beginning of the sequence of elements the algorithm will be searching for.

last1

Refers to the end of the sequence of elements of the first range the algorithm will be applied to.

last2

Refers to the end of the sequence of elements the algorithm will be searching for.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of find_end requires F to meet the requirements of CopyConstructible.

FwdIter1

The type of the source iterators used for the first range (deduced). This iterator type must meet the requirements of a forward iterator.

FwdIter2

The type of the source iterators used for the second range (deduced). This iterator type must meet the requirements of a forward iterator.

Returns:

The find_end algorithm returns a hpx::future<FwdIter1> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns FwdIter1 otherwise. The find_end algorithm returns an iterator to the beginning of the last occurrence of the subsequence [first2, last2) in the range [first1, last1). If the subsequence [first2, last2) is longer than the range [first1, last1), if it is empty, or if no occurrence is found, last1 is returned.


Function template find_first_of

hpx::parallel::v1::find_first_of

Synopsis

// In header: <hpx/parallel/algorithms/find.hpp>


template<typename ExPolicy, typename InIter, typename FwdIter> 
  unspecified find_first_of(ExPolicy && policy, InIter first, InIter last, 
                            FwdIter s_first, FwdIter s_last);

Description

Searches the range [first, last) for any elements in the range [s_first, s_last). Uses operator== to compare elements.

[Note]Note

Complexity: at most (S*N) comparisons where S = distance(s_first, s_last) and N = distance(first, last).

The comparison operations in the parallel find_first_of algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparison operations in the parallel find_first_of algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

first

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

last

Refers to the end of the sequence of elements of the first range the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

s_first

Refers to the beginning of the sequence of elements the algorithm will be searching for.

s_last

Refers to the end of the sequence of elements the algorithm will be searching for.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

FwdIter

The type of the source iterators used for the second range (deduced). This iterator type must meet the requirements of a forward iterator.

InIter

The type of the source iterators used for the first range (deduced). This iterator type must meet the requirements of an input iterator.

Returns:

The find_first_of algorithm returns a hpx::future<InIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns InIter otherwise. The find_first_of algorithm returns an iterator to the first element in the range [first, last) that is equal to an element from the range [s_first, s_last). If the length of the subsequence [s_first, s_last) is greater than the length of the range [first, last), last is returned. Additionally if the size of the subsequence is empty or no subsequence is found, last is also returned.


Function template find_first_of

hpx::parallel::v1::find_first_of

Synopsis

// In header: <hpx/parallel/algorithms/find.hpp>


template<typename ExPolicy, typename InIter, typename FwdIter, typename Pred> 
  unspecified find_first_of(ExPolicy && policy, InIter first, InIter last, 
                            FwdIter s_first, FwdIter s_last, Pred && op);

Description

Searches the range [first, last) for any elements in the range [s_first, s_last). Uses the binary predicate op to compare elements.

[Note]Note

Complexity: at most (S*N) comparisons where S = distance(s_first, s_last) and N = distance(first, last).

The comparison operations in the parallel find_first_of algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparison operations in the parallel find_first_of algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

first

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

last

Refers to the end of the sequence of elements of the first range the algorithm will be applied to.

op

The binary predicate which returns true if the elements should be treated as equal. The signature should be equivalent to the following:

bool pred(const Type1 &a, const Type2 &b);


The signature does not need to have const &, but the function must not modify the objects passed to it. The types Type1 and Type2 must be such that objects of types InIter and FwdIter can be dereferenced and then implicitly converted to Type1 and Type2 respectively.

policy

The execution policy to use for the scheduling of the iterations.

s_first

Refers to the beginning of the sequence of elements the algorithm will be searching for.

s_last

Refers to the end of the sequence of elements the algorithm will be searching for.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

FwdIter

The type of the source iterators used for the second range (deduced). This iterator type must meet the requirements of a forward iterator.

InIter

The type of the source iterators used for the first range (deduced). This iterator type must meet the requirements of an input iterator.

Pred

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of find_first_of requires Pred to meet the requirements of CopyConstructible.

Returns:

The find_first_of algorithm returns a hpx::future<InIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns InIter otherwise. The find_first_of algorithm returns an iterator to the first element in the range [first, last) that is equal to an element from the range [s_first, s_last). If the length of the subsequence [s_first, s_last) is greater than the length of the range [first, last), last is returned. Additionally, if the subsequence is empty or no match is found, last is also returned. This overload of find_first_of is available if the user decides to provide the algorithm their own predicate op.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename InIter, typename Size, typename F, 
               typename Proj = util::projection_identity, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< InIter >::value &&traits::is_projected< Proj, InIter >::value &&traits::is_indirect_callable< F, traits::projected< Proj, InIter > >::value) > 
        unspecified for_each_n(ExPolicy &&, InIter, Size, F &&, 
                               Proj && = Proj());
      template<typename ExPolicy, typename InIter, typename F, 
               typename Proj = util::projection_identity, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< InIter >::value &&traits::is_projected< Proj, InIter >::value &&traits::is_indirect_callable< F, traits::projected< Proj, InIter > >::value) > 
        unspecified for_each(ExPolicy &&, InIter, InIter, F &&, 
                             Proj && = Proj());
    }
  }
}

Function template for_each_n

hpx::parallel::v1::for_each_n

Synopsis

// In header: <hpx/parallel/algorithms/for_each.hpp>


template<typename ExPolicy, typename InIter, typename Size, typename F, 
         typename Proj = util::projection_identity, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< InIter >::value &&traits::is_projected< Proj, InIter >::value &&traits::is_indirect_callable< F, traits::projected< Proj, InIter > >::value) > 
  unspecified for_each_n(ExPolicy && policy, InIter first, Size count, F && f, 
                         Proj && proj = Proj());

Description

Applies f to the result of dereferencing every iterator in the range [first, first + count), starting from first and proceeding to first + count - 1.

[Note]Note

Complexity: Applies f exactly count times.

If f returns a result, the result is ignored.

If the type of first satisfies the requirements of a mutable iterator, f may apply non-constant functions through the dereferenced iterator.

Unlike its sequential form, the parallel overload of for_each does not return a copy of its Function parameter, since parallelization may not permit efficient state accumulation.

The application of function objects in the parallel algorithm invoked with an execution policy object of type sequential_execution_policy executes in sequential order in the calling thread.

The application of function objects in the parallel algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy is permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

count

Refers to the number of elements starting at first the algorithm will be applied to.

f

Specifies the function (or function object) which will be invoked for each of the elements in the sequence specified by [first, last). The signature of this predicate should be equivalent to:

<ignored> pred(const Type &a);


The signature does not need to have const&. The type Type must be such that an object of type InIter can be dereferenced and then implicitly converted to Type.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

proj

Specifies the function (or function object) which will be invoked for each of the elements as a projection operation before the actual predicate f is invoked.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it applies user-provided function objects.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of for_each requires F to meet the requirements of CopyConstructible.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

Proj

The type of an optional projection function. This defaults to util::projection_identity

Size

The type of the argument specifying the number of elements to apply f to.

Returns:

The for_each_n algorithm returns a hpx::future<InIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns InIter otherwise. It returns first + count for non-negative values of count and first for negative values.


Function template for_each

hpx::parallel::v1::for_each

Synopsis

// In header: <hpx/parallel/algorithms/for_each.hpp>


template<typename ExPolicy, typename InIter, typename F, 
         typename Proj = util::projection_identity, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< InIter >::value &&traits::is_projected< Proj, InIter >::value &&traits::is_indirect_callable< F, traits::projected< Proj, InIter > >::value) > 
  unspecified for_each(ExPolicy && policy, InIter first, InIter last, F && f, 
                       Proj && proj = Proj());

Description

Applies f to the result of dereferencing every iterator in the range [first, last).

[Note]Note

Complexity: Applies f exactly last - first times.

If f returns a result, the result is ignored.

If the type of first satisfies the requirements of a mutable iterator, f may apply non-constant functions through the dereferenced iterator.

Unlike its sequential form, the parallel overload of for_each does not return a copy of its Function parameter, since parallelization may not permit efficient state accumulation.

The application of function objects in the parallel algorithm invoked with an execution policy object of type sequential_execution_policy executes in sequential order in the calling thread.

The application of function objects in the parallel algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy is permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

f

Specifies the function (or function object) which will be invoked for each of the elements in the sequence specified by [first, last). The signature of this predicate should be equivalent to:

<ignored> pred(const Type &a);


The signature does not need to have const&. The type Type must be such that an object of type InIter can be dereferenced and then implicitly converted to Type.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

proj

Specifies the function (or function object) which will be invoked for each of the elements as a projection operation before the actual predicate f is invoked.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it applies user-provided function objects.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of for_each requires F to meet the requirements of CopyConstructible.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

Proj

The type of an optional projection function. This defaults to util::projection_identity

Returns:

The for_each algorithm returns a hpx::future<InIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns InIter otherwise. It returns last.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename Rng, typename F, 
               typename Proj = util::projection_identity, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_range< Rng >::value &&traits::is_projected_range< Proj, Rng >::value &&traits::is_indirect_callable< F, traits::projected_range< Proj, Rng > >::value) > 
        unspecified for_each(ExPolicy &&, Rng &&, F &&, Proj && = Proj());
    }
  }
}

Function template for_each

hpx::parallel::v1::for_each

Synopsis

// In header: <hpx/parallel/container_algorithms/for_each.hpp>


template<typename ExPolicy, typename Rng, typename F, 
         typename Proj = util::projection_identity, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_range< Rng >::value &&traits::is_projected_range< Proj, Rng >::value &&traits::is_indirect_callable< F, traits::projected_range< Proj, Rng > >::value) > 
  unspecified for_each(ExPolicy && policy, Rng && rng, F && f, 
                       Proj && proj = Proj());

Description

Applies f to the result of dereferencing every iterator in the given range rng.

[Note]Note

Complexity: Applies f exactly size(rng) times.

If f returns a result, the result is ignored.

If the type of first satisfies the requirements of a mutable iterator, f may apply non-constant functions through the dereferenced iterator.

Unlike its sequential form, the parallel overload of for_each does not return a copy of its Function parameter, since parallelization may not permit efficient state accumulation.

The application of function objects in the parallel algorithm invoked with an execution policy object of type sequential_execution_policy executes in sequential order in the calling thread.

The application of function objects in the parallel algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy is permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

f

Specifies the function (or function object) which will be invoked for each of the elements in the sequence specified by [first, last). The signature of this predicate should be equivalent to:

<ignored> pred(const Type &a);


The signature does not need to have const&. The type Type must be such that an object of type InIter can be dereferenced and then implicitly converted to Type.

policy

The execution policy to use for the scheduling of the iterations.

proj

Specifies the function (or function object) which will be invoked for each of the elements as a projection operation before the actual predicate is invoked.

rng

Refers to the sequence of elements the algorithm will be applied to.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it applies user-provided function objects.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of for_each requires F to meet the requirements of CopyConstructible.

Proj

The type of an optional projection function. This defaults to util::projection_identity

Rng

The type of the source range used (deduced). The iterators extracted from this range type must meet the requirements of an input iterator.

Returns:

The for_each algorithm returns a hpx::future<InIter> (where InIter is the iterator type of rng) if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns InIter otherwise. It returns an iterator referring to the end of rng.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename FwdIter, typename F, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< FwdIter >::value) > 
        unspecified generate(ExPolicy &&, FwdIter, FwdIter, F &&);
      template<typename ExPolicy, typename OutIter, typename Size, typename F, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< OutIter >::value) > 
        unspecified generate_n(ExPolicy &&, OutIter, Size, F &&);
    }
  }
}

Function template generate

hpx::parallel::v1::generate

Synopsis

// In header: <hpx/parallel/algorithms/generate.hpp>


template<typename ExPolicy, typename FwdIter, typename F, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< FwdIter >::value) > 
  unspecified generate(ExPolicy && policy, FwdIter first, FwdIter last, 
                       F && f);

Description

Assigns each element in the range [first, last) a value generated by the given function object f.

[Note]Note

Complexity: Exactly distance(first, last) invocations of f and assignments.

The assignments in the parallel generate algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel generate algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

f

The generator function that will be called. The signature of the function should be equivalent to the following:

Ret fun();


The type Ret must be such that an object of type FwdIter can be dereferenced and assigned a value of type Ret.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of generate requires F to meet the requirements of CopyConstructible.

FwdIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of a forward iterator.

Returns:

The generate algorithm returns a hpx::future<FwdIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns FwdIter otherwise. It returns last.


Function template generate_n

hpx::parallel::v1::generate_n

Synopsis

// In header: <hpx/parallel/algorithms/generate.hpp>


template<typename ExPolicy, typename OutIter, typename Size, typename F, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< OutIter >::value) > 
  unspecified generate_n(ExPolicy && policy, OutIter first, Size count, 
                         F && f);

Description

Assigns each element in the range [first, first + count) a value generated by the given function object f.

[Note]Note

Complexity: Exactly count invocations of f and assignments, for count > 0.

The assignments in the parallel generate_n algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel generate_n algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

count

Refers to the number of elements in the sequence the algorithm will be applied to.

f

Refers to the generator function object that will be called. The signature of the function should be equivalent to

Ret fun();


The type Ret must be such that an object of type OutIter can be dereferenced and assigned a value of type Ret.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of generate_n requires F to meet the requirements of CopyConstructible.

OutIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an output iterator.

Returns:

The generate_n algorithm returns a hpx::future<OutIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns OutIter otherwise. It returns the iterator one past the last element assigned, i.e. first + count (or first if count is zero or negative).

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename Rng, typename F, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_range< Rng >::value) > 
        unspecified generate(ExPolicy &&, Rng &&, F &&);
    }
  }
}

Function template generate

hpx::parallel::v1::generate

Synopsis

// In header: <hpx/parallel/container_algorithms/generate.hpp>


template<typename ExPolicy, typename Rng, typename F, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_range< Rng >::value) > 
  unspecified generate(ExPolicy && policy, Rng && rng, F && f);

Description

Assigns each element in the range a value generated by the given function object f.

[Note]Note

Complexity: Exactly distance(first, last) invocations of f and assignments.

The assignments in the parallel generate algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel generate algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

f

The generator function that will be called. The signature of the function should be equivalent to the following:

Ret fun();


The type Ret must be such that an object of the iterator type of rng can be dereferenced and assigned a value of type Ret.

policy

The execution policy to use for the scheduling of the iterations.

rng

Refers to the sequence of elements the algorithm will be applied to.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of generate requires F to meet the requirements of CopyConstructible.

Rng

The type of the source range used (deduced). The iterators extracted from this range type must meet the requirements of a forward iterator.

Returns:

The generate algorithm returns a hpx::future<Iter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns Iter otherwise, where Iter is the iterator type of rng. It returns the end iterator of the range.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename InIter1, typename InIter2> 
        unspecified includes(ExPolicy &&, InIter1, InIter1, InIter2, InIter2);
      template<typename ExPolicy, typename InIter1, typename InIter2, 
               typename F> 
        unspecified includes(ExPolicy &&, InIter1, InIter1, InIter2, InIter2, 
                             F &&);
    }
  }
}

Function template includes

hpx::parallel::v1::includes

Synopsis

// In header: <hpx/parallel/algorithms/includes.hpp>


template<typename ExPolicy, typename InIter1, typename InIter2> 
  unspecified includes(ExPolicy && policy, InIter1 first1, InIter1 last1, 
                       InIter2 first2, InIter2 last2);

Description

Returns true if every element from the sorted range [first2, last2) is found within the sorted range [first1, last1). Also returns true if [first2, last2) is empty. This version expects both ranges to be sorted with operator<().

[Note]Note

At most 2*(N1+N2-1) comparisons, where N1 = std::distance(first1, last1) and N2 = std::distance(first2, last2).

The comparison operations in the parallel includes algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparison operations in the parallel includes algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

first1

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

first2

Refers to the beginning of the sequence of elements of the second range the algorithm will be applied to.

last1

Refers to the end of the sequence of elements of the first range the algorithm will be applied to.

last2

Refers to the end of the sequence of elements of the second range the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter1

The type of the source iterators used for the first range (deduced). This iterator type must meet the requirements of an input iterator.

InIter2

The type of the source iterators used for the second range (deduced). This iterator type must meet the requirements of an input iterator.

Returns:

The includes algorithm returns a hpx::future<bool> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns bool otherwise. The includes algorithm returns true if every element from the sorted range [first2, last2) is found within the sorted range [first1, last1). Also returns true if [first2, last2) is empty.


Function template includes

hpx::parallel::v1::includes

Synopsis

// In header: <hpx/parallel/algorithms/includes.hpp>


template<typename ExPolicy, typename InIter1, typename InIter2, typename F> 
  unspecified includes(ExPolicy && policy, InIter1 first1, InIter1 last1, 
                       InIter2 first2, InIter2 last2, F && f);

Description

Returns true if every element from the sorted range [first2, last2) is found within the sorted range [first1, last1). Also returns true if [first2, last2) is empty. This version expects both ranges to be sorted with the user-supplied binary predicate f.

[Note]Note

At most 2*(N1+N2-1) comparisons, where N1 = std::distance(first1, last1) and N2 = std::distance(first2, last2).

The comparison operations in the parallel includes algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparison operations in the parallel includes algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

f

The binary predicate which returns true if the elements should be treated as includes. The signature of the predicate function should be equivalent to the following:

bool pred(const Type1 &a, const Type2 &b);


The signature does not need to have const &, but the function must not modify the objects passed to it. The types Type1 and Type2 must be such that objects of types InIter1 and InIter2 can be dereferenced and then implicitly converted to Type1 and Type2, respectively.

first1

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

first2

Refers to the beginning of the sequence of elements of the second range the algorithm will be applied to.

last1

Refers to the end of the sequence of elements of the first range the algorithm will be applied to.

last2

Refers to the end of the sequence of elements of the second range the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of includes requires F to meet the requirements of CopyConstructible.

InIter1

The type of the source iterators used for the first range (deduced). This iterator type must meet the requirements of an input iterator.

InIter2

The type of the source iterators used for the second range (deduced). This iterator type must meet the requirements of an input iterator.

Returns:

The includes algorithm returns a hpx::future<bool> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns bool otherwise. The includes algorithm returns true if every element from the sorted range [first2, last2) is found within the sorted range [first1, last1). Also returns true if [first2, last2) is empty.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename InIter, typename OutIter, 
               typename T, typename Op> 
        unspecified inclusive_scan(ExPolicy &&, InIter, InIter, OutIter, T, 
                                   Op &&);
      template<typename ExPolicy, typename InIter, typename OutIter, 
               typename T> 
        unspecified inclusive_scan(ExPolicy &&, InIter, InIter, OutIter, T);
      template<typename ExPolicy, typename InIter, typename OutIter> 
        unspecified inclusive_scan(ExPolicy &&, InIter, InIter, OutIter);
    }
  }
}

Function template inclusive_scan

hpx::parallel::v1::inclusive_scan

Synopsis

// In header: <hpx/parallel/algorithms/inclusive_scan.hpp>


template<typename ExPolicy, typename InIter, typename OutIter, typename T, 
         typename Op> 
  unspecified inclusive_scan(ExPolicy && policy, InIter first, InIter last, 
                             OutIter dest, T init, Op && op);

Description

Assigns through each iterator i in [result, result + (last - first)) the value of GENERALIZED_NONCOMMUTATIVE_SUM(op, init, *first, ..., *(first + (i - result))).

[Note]Note

Complexity: O(last - first) applications of the predicate op.

The reduce operations in the parallel inclusive_scan algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The reduce operations in the parallel inclusive_scan algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

[Note]Note

GENERALIZED_NONCOMMUTATIVE_SUM(op, a1, ..., aN) is defined as:

  • a1 when N is 1

  • op(GENERALIZED_NONCOMMUTATIVE_SUM(op, a1, ..., aK), GENERALIZED_NONCOMMUTATIVE_SUM(op, aM, ..., aN)) where 1 < K+1 = M <= N.

The difference between exclusive_scan and inclusive_scan is that inclusive_scan includes the ith input element in the ith sum. If op is not mathematically associative, the behavior of inclusive_scan may be non-deterministic.

Parameters:

dest

Refers to the beginning of the destination range.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

init

The initial value for the generalized sum.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

op

Specifies the function (or function object) which will be invoked for each of the values of the input sequence. This is a binary predicate. The signature of this predicate should be equivalent to:

Ret fun(const Type1 &a, const Type1 &b);


The signature does not need to have const&, but the function must not modify the objects passed to it. The types Type1 and Ret must be such that an object of a type as given by the input sequence can be implicitly converted to any of those types.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

Op

The type of the binary function object used for the reduction operation.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

T

The type of the value to be used as initial (and intermediate) values (deduced).

Returns:

The inclusive_scan algorithm returns a hpx::future<OutIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns OutIter otherwise. The inclusive_scan algorithm returns the output iterator to the element in the destination range, one past the last element copied.


Function template inclusive_scan

hpx::parallel::v1::inclusive_scan

Synopsis

// In header: <hpx/parallel/algorithms/inclusive_scan.hpp>


template<typename ExPolicy, typename InIter, typename OutIter, typename T> 
  unspecified inclusive_scan(ExPolicy && policy, InIter first, InIter last, 
                             OutIter dest, T init);

Description

Assigns through each iterator i in [result, result + (last - first)) the value of GENERALIZED_NONCOMMUTATIVE_SUM(+, init, *first, ..., *(first + (i - result))).

[Note]Note

Complexity: O(last - first) applications of the predicate op.

The reduce operations in the parallel inclusive_scan algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The reduce operations in the parallel inclusive_scan algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

[Note]Note

GENERALIZED_NONCOMMUTATIVE_SUM(+, a1, ..., aN) is defined as:

  • a1 when N is 1

  • GENERALIZED_NONCOMMUTATIVE_SUM(+, a1, ..., aK) + GENERALIZED_NONCOMMUTATIVE_SUM(+, aM, ..., aN) where 1 < K+1 = M <= N.

The difference between exclusive_scan and inclusive_scan is that inclusive_scan includes the ith input element in the ith sum.

Parameters:

dest

Refers to the beginning of the destination range.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

init

The initial value for the generalized sum.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

T

The type of the value to be used as initial (and intermediate) values (deduced).

Returns:

The inclusive_scan algorithm returns a hpx::future<OutIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns OutIter otherwise. The inclusive_scan algorithm returns the output iterator to the element in the destination range, one past the last element copied.


Function template inclusive_scan

hpx::parallel::v1::inclusive_scan

Synopsis

// In header: <hpx/parallel/algorithms/inclusive_scan.hpp>


template<typename ExPolicy, typename InIter, typename OutIter> 
  unspecified inclusive_scan(ExPolicy && policy, InIter first, InIter last, 
                             OutIter dest);

Description

Assigns through each iterator i in [result, result + (last - first)) the value of GENERALIZED_NONCOMMUTATIVE_SUM(+, *first, ..., *(first + (i - result))).

[Note]Note

Complexity: O(last - first) applications of the predicate op.

The reduce operations in the parallel inclusive_scan algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The reduce operations in the parallel inclusive_scan algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

[Note]Note

GENERALIZED_NONCOMMUTATIVE_SUM(+, a1, ..., aN) is defined as:

  • a1 when N is 1

  • GENERALIZED_NONCOMMUTATIVE_SUM(+, a1, ..., aK) + GENERALIZED_NONCOMMUTATIVE_SUM(+, aM, ..., aN) where 1 < K+1 = M <= N.

The difference between exclusive_scan and inclusive_scan is that inclusive_scan includes the ith input element in the ith sum.

Parameters:

dest

Refers to the beginning of the destination range.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Returns:

The inclusive_scan algorithm returns a hpx::future<OutIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns OutIter otherwise. The inclusive_scan algorithm returns the output iterator to the element in the destination range, one past the last element copied.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename InIter1, typename InIter2, 
               typename T> 
        unspecified inner_product(ExPolicy &&, InIter1, InIter1, InIter2, T);
      template<typename ExPolicy, typename InIter1, typename InIter2, 
               typename T, typename Op1, typename Op2> 
        unspecified inner_product(ExPolicy &&, InIter1, InIter1, InIter2, T, 
                                  Op1 &&, Op2 &&);
    }
  }
}

Function template inner_product

hpx::parallel::v1::inner_product

Synopsis

// In header: <hpx/parallel/algorithms/inner_product.hpp>


template<typename ExPolicy, typename InIter1, typename InIter2, typename T> 
  unspecified inner_product(ExPolicy && policy, InIter1 first1, InIter1 last1, 
                            InIter2 first2, T init);

Description

Returns the result of accumulating init with the inner products of the pairs formed by the elements of two ranges starting at first1 and first2.

[Note]Note

Complexity: O(last - first) applications of the predicate op2.

The operations in the parallel inner_product algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The operations in the parallel inner_product algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

first1

Refers to the beginning of the first sequence of elements the result will be calculated with.

first2

Refers to the beginning of the second sequence of elements the result will be calculated with.

init

The initial value for the sum.

last1

Refers to the end of the first sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter1

The type of the first source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

InIter2

The type of the second source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

T

The type of the value to be used as the return value (deduced).

Returns:

The inner_product algorithm returns a hpx::future<T> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns T otherwise.


Function template inner_product

hpx::parallel::v1::inner_product

Synopsis

// In header: <hpx/parallel/algorithms/inner_product.hpp>


template<typename ExPolicy, typename InIter1, typename InIter2, typename T, 
         typename Op1, typename Op2> 
  unspecified inner_product(ExPolicy && policy, InIter1 first1, InIter1 last1, 
                            InIter2 first2, T init, Op1 && op1, Op2 && op2);

Description

Returns the result of accumulating init with the inner products of the pairs formed by the elements of two ranges starting at first1 and first2.

[Note]Note

Complexity: O(last - first) applications of the predicate op2.

The operations in the parallel inner_product algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The operations in the parallel inner_product algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

first1

Refers to the beginning of the first sequence of elements the result will be calculated with.

first2

Refers to the beginning of the second sequence of elements the result will be calculated with.

init

The initial value for the sum.

last1

Refers to the end of the first sequence of elements the algorithm will be applied to.

op1

Specifies the function (or function object) which will be invoked for the initial value and each of the return values of op2. This is the reduction (summation) operation, a binary predicate whose signature should be equivalent to:

Ret fun(const Type1 &a, const Type1 &b);


The signature does not need to have const&, but the function must not modify the objects passed to it. The type Ret must be such that it can be implicitly converted to T.

op2

Specifies the function (or function object) which will be invoked for each pair of input values of the two sequences. This is the multiplication operation, a binary predicate whose signature should be equivalent to:

Ret fun(const Type1 &a, const Type2 &b);


The signature does not need to have const&, but the function must not modify the objects passed to it. The type Ret must be such that it can be implicitly converted to the second argument type of op1.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter1

The type of the first source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

InIter2

The type of the second source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

Op1

The type of the binary function object used for the summation operation.

Op2

The type of the binary function object used for the multiplication operation.

T

The type of the value to be used as the return value (deduced).

Returns:

The inner_product algorithm returns a hpx::future<T> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns T otherwise.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename InIter, typename Pred> 
        unspecified is_partitioned(ExPolicy &&, InIter, InIter, Pred &&);
    }
  }
}

Function template is_partitioned

hpx::parallel::v1::is_partitioned

Synopsis

// In header: <hpx/parallel/algorithms/is_partitioned.hpp>


template<typename ExPolicy, typename InIter, typename Pred> 
  unspecified is_partitioned(ExPolicy && policy, InIter first, InIter last, 
                             Pred && pred);

Description

Determines if the range [first, last) is partitioned.

[Note]Note

Complexity: at most (N) predicate evaluations where N = distance(first, last).

The predicate operations in the parallel is_partitioned algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparison operations in the parallel is_partitioned algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

pred

Refers to the unary predicate which returns true for elements expected to be found in the beginning of the range. The signature of the function should be equivalent to

bool pred(const Type &a);


The signature does not need to have const &, but the function must not modify the objects passed to it. The type Type must be such that an object of type InIter can be dereferenced and then implicitly converted to Type.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

Returns:

The is_partitioned algorithm returns a hpx::future<bool> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns bool otherwise. The is_partitioned algorithm returns true if each element in the sequence for which pred returns true precedes those for which pred returns false. Otherwise is_partitioned returns false. If the range [first, last) contains fewer than two elements, the function always returns true.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename FwdIter, typename Pred> 
        unspecified is_sorted(ExPolicy &&, FwdIter, FwdIter, Pred &&);
      template<typename ExPolicy, typename FwdIter> 
        unspecified is_sorted(ExPolicy &&, FwdIter, FwdIter);
      template<typename ExPolicy, typename FwdIter, typename Pred> 
        unspecified is_sorted_until(ExPolicy &&, FwdIter, FwdIter, Pred &&);
      template<typename ExPolicy, typename FwdIter> 
        unspecified is_sorted_until(ExPolicy &&, FwdIter, FwdIter);
    }
  }
}

Function template is_sorted

hpx::parallel::v1::is_sorted

Synopsis

// In header: <hpx/parallel/algorithms/is_sorted.hpp>


template<typename ExPolicy, typename FwdIter, typename Pred> 
  unspecified is_sorted(ExPolicy && policy, FwdIter first, FwdIter last, 
                        Pred && pred);

Description

Determines if the range [first, last) is sorted. Uses pred to compare elements.

[Note]Note

Complexity: at most (N+S-1) comparisons, where N = distance(first, last) and S = the number of partitions.

The comparison operations in the parallel is_sorted algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparison operations in the parallel is_sorted algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

pred

Refers to the binary predicate which returns true if the first argument should be treated as less than the second argument. The signature of the function should be equivalent to

bool pred(const Type &a, const Type &b);


The signature does not need to have const &, but the function must not modify the objects passed to it. The type Type must be such that objects of type FwdIter can be dereferenced and then implicitly converted to Type.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

FwdIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of a forward iterator.

Returns:

The is_sorted algorithm returns a hpx::future<bool> if the execution policy is of type task_execution_policy and returns bool otherwise. The is_sorted algorithm returns true if the sequence [first, last) is sorted with respect to pred, that is, pred(*(it + 1), *it) is false for every valid iterator it. If the range [first, last) contains fewer than two elements, the function always returns true.


Function template is_sorted

hpx::parallel::v1::is_sorted

Synopsis

// In header: <hpx/parallel/algorithms/is_sorted.hpp>


template<typename ExPolicy, typename FwdIter> 
  unspecified is_sorted(ExPolicy && policy, FwdIter first, FwdIter last);

Description

Determines if the range [first, last) is sorted. Uses operator< to compare elements.

[Note]Note

Complexity: at most (N+S-1) comparisons, where N = std::distance(first, last) and S = the number of partitions.

The comparison operations in the parallel is_sorted algorithm invoked with an execution policy object of type sequential_execution_policy executes in sequential order in the calling thread.

The comparison operations in the parallel is_sorted algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

FwdIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of a forward iterator.

Returns:

The is_sorted algorithm returns a hpx::future<bool> if the execution policy is of type task_execution_policy and returns bool otherwise. The is_sorted algorithm returns true if each element in the sequence [first, last) is greater than or equal to the previous element. If the range [first, last) contains fewer than two elements, the function always returns true.


Function template is_sorted_until

hpx::parallel::v1::is_sorted_until

Synopsis

// In header: <hpx/parallel/algorithms/is_sorted.hpp>


template<typename ExPolicy, typename FwdIter, typename Pred> 
  unspecified is_sorted_until(ExPolicy && policy, FwdIter first, FwdIter last, 
                              Pred && pred);

Description

Returns the first element in the range [first, last) that is not sorted. Uses the given binary predicate pred to compare elements.

[Note]Note

Complexity: at most (N+S-1) comparisons, where N = std::distance(first, last) and S = the number of partitions.

The comparison operations in the parallel is_sorted_until algorithm invoked with an execution policy object of type sequential_execution_policy executes in sequential order in the calling thread.

The comparison operations in the parallel is_sorted_until algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

pred

Refers to the binary predicate which returns true if the first argument should be treated as less than the second argument. The signature of the function should be equivalent to

bool pred(const Type &a, const Type &b);


The signature does not need to have const &, but the function must not modify the objects passed to it. The type Type must be such that objects of type FwdIter can be dereferenced and then implicitly converted to Type.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

FwdIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of a forward iterator.

Returns:

The is_sorted_until algorithm returns a hpx::future<FwdIter> if the execution policy is of type task_execution_policy and returns FwdIter otherwise. The is_sorted_until algorithm returns an iterator to the first unsorted element. If the sequence has fewer than two elements, or the sequence is sorted, last is returned.


Function template is_sorted_until

hpx::parallel::v1::is_sorted_until

Synopsis

// In header: <hpx/parallel/algorithms/is_sorted.hpp>


template<typename ExPolicy, typename FwdIter> 
  unspecified is_sorted_until(ExPolicy && policy, FwdIter first, FwdIter last);

Description

Returns the first element in the range [first, last) that is not sorted.

[Note]Note

Complexity: at most (N+S-1) comparisons, where N = std::distance(first, last) and S = the number of partitions.

The comparison operations in the parallel is_sorted_until algorithm invoked with an execution policy object of type sequential_execution_policy executes in sequential order in the calling thread.

The comparison operations in the parallel is_sorted_until algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

FwdIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of a forward iterator.

Returns:

The is_sorted_until algorithm returns a hpx::future<FwdIter> if the execution policy is of type task_execution_policy and returns FwdIter otherwise. The is_sorted_until algorithm returns an iterator to the first unsorted element. If the sequence has fewer than two elements, or the sequence is sorted, last is returned.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename InIter1, typename InIter2> 
        unspecified lexicographical_compare(ExPolicy &&, InIter1, InIter1, 
                                            InIter2, InIter2);
      template<typename ExPolicy, typename InIter1, typename InIter2, 
               typename Pred> 
        unspecified lexicographical_compare(ExPolicy &&, InIter1, InIter1, 
                                            InIter2, InIter2, Pred &&);
    }
  }
}

Function template lexicographical_compare

hpx::parallel::v1::lexicographical_compare

Synopsis

// In header: <hpx/parallel/algorithms/lexicographical_compare.hpp>


template<typename ExPolicy, typename InIter1, typename InIter2> 
  unspecified lexicographical_compare(ExPolicy && policy, InIter1 first1, 
                                      InIter1 last1, InIter2 first2, 
                                      InIter2 last2);

Description

Checks if the first range [first1, last1) is lexicographically less than the second range [first2, last2). Uses operator< to compare elements.

[Note]Note

Complexity: At most 2 * min(N1, N2) applications of the comparison operation <, where N1 = std::distance(first1, last1) and N2 = std::distance(first2, last2).

The comparison operations in the parallel lexicographical_compare algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparison operations in the parallel lexicographical_compare algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

[Note]Note

Lexicographical comparison is an operation with the following properties

  • Two ranges are compared element by element

  • The first mismatching element defines which range is lexicographically less or greater than the other

  • If one range is a prefix of another, the shorter range is lexicographically less than the other

  • If two ranges have equivalent elements and are of the same length, then the ranges are lexicographically equal

  • An empty range is lexicographically less than any non-empty range

  • Two empty ranges are lexicographically equal

Parameters:

first1

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

first2

Refers to the beginning of the sequence of elements of the second range the algorithm will be applied to.

last1

Refers to the end of the sequence of elements of the first range the algorithm will be applied to.

last2

Refers to the end of the sequence of elements of the second range the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter1

The type of the source iterators used for the first range (deduced). This iterator type must meet the requirements of an input iterator.

InIter2

The type of the source iterators used for the second range (deduced). This iterator type must meet the requirements of an input iterator.

Returns:

The lexicographical_compare algorithm returns a hpx::future<bool> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns bool otherwise. The lexicographical_compare algorithm returns true if the first range is lexicographically less than the second range, otherwise it returns false.


Function template lexicographical_compare

hpx::parallel::v1::lexicographical_compare

Synopsis

// In header: <hpx/parallel/algorithms/lexicographical_compare.hpp>


template<typename ExPolicy, typename InIter1, typename InIter2, typename Pred> 
  unspecified lexicographical_compare(ExPolicy && policy, InIter1 first1, 
                                      InIter1 last1, InIter2 first2, 
                                      InIter2 last2, Pred && pred);

Description

Checks if the first range [first1, last1) is lexicographically less than the second range [first2, last2). Uses the provided binary predicate pred to compare elements.

[Note]Note

Complexity: At most 2 * min(N1, N2) applications of the comparison operation, where N1 = std::distance(first1, last1) and N2 = std::distance(first2, last2).

The comparison operations in the parallel lexicographical_compare algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparison operations in the parallel lexicographical_compare algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

[Note]Note

Lexicographical comparison is an operation with the following properties

  • Two ranges are compared element by element

  • The first mismatching element defines which range is lexicographically less or greater than the other

  • If one range is a prefix of another, the shorter range is lexicographically less than the other

  • If two ranges have equivalent elements and are of the same length, then the ranges are lexicographically equal

  • An empty range is lexicographically less than any non-empty range

  • Two empty ranges are lexicographically equal

Parameters:

first1

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

first2

Refers to the beginning of the sequence of elements of the second range the algorithm will be applied to.

last1

Refers to the end of the sequence of elements of the first range the algorithm will be applied to.

last2

Refers to the end of the sequence of elements of the second range the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

pred

Refers to the comparison function that the first and second ranges will be applied to

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter1

The type of the source iterators used for the first range (deduced). This iterator type must meet the requirements of an input iterator.

InIter2

The type of the source iterators used for the second range (deduced). This iterator type must meet the requirements of an input iterator.

Pred

comparison function object that returns true if the first argument is less than the second

Returns:

The lexicographical_compare algorithm returns a hpx::future<bool> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns bool otherwise. The lexicographical_compare algorithm returns true if the first range is lexicographically less than the second range, otherwise it returns false.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename FwdIter, 
               typename Proj = util::projection_identity, 
               typename F = std::less<            typename std::remove_reference<                typename traits::projected_result_of<Proj, FwdIter>::type            >::type>, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< FwdIter >::value &&traits::is_projected< Proj, FwdIter >::value &&traits::is_indirect_callable< F,traits::projected< Proj, FwdIter >,traits::projected< Proj, FwdIter > >::value) > 
        unspecified min_element(ExPolicy &&, FwdIter, FwdIter, F && = F(), 
                                Proj && = Proj());
      template<typename ExPolicy, typename FwdIter, 
               typename Proj = util::projection_identity, 
               typename F = std::less<            typename std::remove_reference<                typename traits::projected_result_of<Proj, FwdIter>::type            >::type>, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< FwdIter >::value &&traits::is_projected< Proj, FwdIter >::value &&traits::is_indirect_callable< F,traits::projected< Proj, FwdIter >,traits::projected< Proj, FwdIter > >::value) > 
        unspecified max_element(ExPolicy &&, FwdIter, FwdIter, F && = F(), 
                                Proj && = Proj());
      template<typename ExPolicy, typename FwdIter, 
               typename Proj = util::projection_identity, 
               typename F = std::less<            typename std::remove_reference<                typename traits::projected_result_of<Proj, FwdIter>::type            >::type>, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< FwdIter >::value &&traits::is_projected< Proj, FwdIter >::value &&traits::is_indirect_callable< F, traits::projected< Proj, FwdIter >, traits::projected< Proj, FwdIter > >::value) > 
        unspecified minmax_element(ExPolicy &&, FwdIter, FwdIter, F && = F(), 
                                   Proj && = Proj());
    }
  }
}

Function template min_element

hpx::parallel::v1::min_element

Synopsis

// In header: <hpx/parallel/algorithms/minmax.hpp>


template<typename ExPolicy, typename FwdIter, 
         typename Proj = util::projection_identity, 
         typename F = std::less<            typename std::remove_reference<                typename traits::projected_result_of<Proj, FwdIter>::type            >::type>, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< FwdIter >::value &&traits::is_projected< Proj, FwdIter >::value &&traits::is_indirect_callable< F,traits::projected< Proj, FwdIter >,traits::projected< Proj, FwdIter > >::value) > 
  unspecified min_element(ExPolicy && policy, FwdIter first, FwdIter last, 
                          F && f = F(), Proj && proj = Proj());

Description

Finds the smallest element in the range [first, last) using the given comparison function f.

[Note]Note

Complexity: Exactly max(N-1, 0) comparisons, where N = std::distance(first, last).

The comparisons in the parallel min_element algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparisons in the parallel min_element algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

f

The binary predicate which returns true if the left argument is less than the right element. The signature of the predicate function should be equivalent to the following:

bool pred(const Type1 &a, const Type1 &b);


The signature does not need to have const &, but the function must not modify the objects passed to it. The type Type1 must be such that objects of type FwdIter can be dereferenced and then implicitly converted to Type1.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

proj

Specifies the function (or function object) which will be invoked for each of the elements as a projection operation before the actual predicate is invoked.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of min_element requires F to meet the requirements of CopyConstructible.

FwdIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of a forward iterator.

Proj

The type of an optional projection function. This defaults to util::projection_identity

Returns:

The min_element algorithm returns a hpx::future<FwdIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns FwdIter otherwise. The min_element algorithm returns an iterator to the smallest element in the range [first, last). If several elements in the range are equivalent to the smallest element, it returns the iterator to the first such element. Returns last if the range is empty.


Function template max_element

hpx::parallel::v1::max_element

Synopsis

// In header: <hpx/parallel/algorithms/minmax.hpp>


template<typename ExPolicy, typename FwdIter, 
         typename Proj = util::projection_identity, 
         typename F = std::less<            typename std::remove_reference<                typename traits::projected_result_of<Proj, FwdIter>::type            >::type>, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< FwdIter >::value &&traits::is_projected< Proj, FwdIter >::value &&traits::is_indirect_callable< F,traits::projected< Proj, FwdIter >,traits::projected< Proj, FwdIter > >::value) > 
  unspecified max_element(ExPolicy && policy, FwdIter first, FwdIter last, 
                          F && f = F(), Proj && proj = Proj());

Description

Finds the greatest element in the range [first, last) using the given comparison function f.

[Note]Note

Complexity: Exactly max(N-1, 0) comparisons, where N = std::distance(first, last).

The comparisons in the parallel max_element algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparisons in the parallel max_element algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

f

The binary predicate which returns true if the left argument is less than the right element. This argument is optional and defaults to std::less. The signature of the predicate function should be equivalent to the following:

bool pred(const Type1 &a, const Type1 &b);


The signature does not need to have const &, but the function must not modify the objects passed to it. The type Type1 must be such that objects of type FwdIter can be dereferenced and then implicitly converted to Type1.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

proj

Specifies the function (or function object) which will be invoked for each of the elements as a projection operation before the actual predicate is invoked.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of max_element requires F to meet the requirements of CopyConstructible.

FwdIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of a forward iterator.

Proj

The type of an optional projection function. This defaults to util::projection_identity

Returns:

The max_element algorithm returns a hpx::future<FwdIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns FwdIter otherwise. The max_element algorithm returns an iterator to the greatest element in the range [first, last). If several elements in the range are equivalent to the greatest element, it returns the iterator to the first such element. Returns last if the range is empty.


Function template minmax_element

hpx::parallel::v1::minmax_element

Synopsis

// In header: <hpx/parallel/algorithms/minmax.hpp>


template<typename ExPolicy, typename FwdIter, 
         typename Proj = util::projection_identity, 
         typename F = std::less<            typename std::remove_reference<                typename traits::projected_result_of<Proj, FwdIter>::type            >::type>, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< FwdIter >::value &&traits::is_projected< Proj, FwdIter >::value &&traits::is_indirect_callable< F, traits::projected< Proj, FwdIter >, traits::projected< Proj, FwdIter > >::value) > 
  unspecified minmax_element(ExPolicy && policy, FwdIter first, FwdIter last, 
                             F && f = F(), Proj && proj = Proj());

Description

Finds the smallest and greatest elements in the range [first, last) using the given comparison function f.

[Note]Note

Complexity: At most max(floor(3/2*(N-1)), 0) applications of the predicate, where N = std::distance(first, last).

The comparisons in the parallel minmax_element algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparisons in the parallel minmax_element algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

f

The binary predicate which returns true if the left argument is less than the right element. This argument is optional and defaults to std::less. The signature of the predicate function should be equivalent to the following:

bool pred(const Type1 &a, const Type1 &b);


The signature does not need to have const &, but the function must not modify the objects passed to it. The type Type1 must be such that objects of type FwdIter can be dereferenced and then implicitly converted to Type1.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

proj

Specifies the function (or function object) which will be invoked for each of the elements as a projection operation before the actual predicate is invoked.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of minmax_element requires F to meet the requirements of CopyConstructible.

FwdIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of a forward iterator.

Proj

The type of an optional projection function. This defaults to util::projection_identity

Returns:

The minmax_element algorithm returns a hpx::future<tagged_pair<tag::min(FwdIter), tag::max(FwdIter)> > if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns tagged_pair<tag::min(FwdIter), tag::max(FwdIter)> otherwise. The minmax_element algorithm returns a pair consisting of an iterator to the smallest element as the first element and an iterator to the greatest element as the second. Returns std::make_pair(first, first) if the range is empty. If several elements are equivalent to the smallest element, the iterator to the first such element is returned. If several elements are equivalent to the largest element, the iterator to the last such element is returned.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename Rng, 
               typename Proj = util::projection_identity, 
               typename F = std::less<            typename std::remove_reference<                typename traits::projected_range_result_of<Proj, Rng>::type            >::type>, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_range< Rng >::value &&traits::is_projected_range< Proj, Rng >::value &&traits::is_indirect_callable< F,traits::projected_range< Proj, Rng >,traits::projected_range< Proj, Rng > >::value) > 
        unspecified min_element(ExPolicy &&, Rng &&, F && = F(), 
                                Proj && = Proj());
      template<typename ExPolicy, typename Rng, 
               typename Proj = util::projection_identity, 
               typename F = std::less<            typename std::remove_reference<                typename traits::projected_range_result_of<Proj, Rng>::type            >::type>, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_range< Rng >::value &&traits::is_projected_range< Proj, Rng >::value &&traits::is_indirect_callable< F,traits::projected_range< Proj, Rng >,traits::projected_range< Proj, Rng > >::value) > 
        unspecified max_element(ExPolicy &&, Rng &&, F && = F(), 
                                Proj && = Proj());
      template<typename ExPolicy, typename Rng, 
               typename Proj = util::projection_identity, 
               typename F = std::less<            typename std::remove_reference<                typename traits::projected_range_result_of<Proj, Rng>::type            >::type>, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_range< Rng >::value &&traits::is_projected_range< Proj, Rng >::value &&traits::is_indirect_callable< F,traits::projected_range< Proj, Rng >,traits::projected_range< Proj, Rng > >::value) > 
        unspecified minmax_element(ExPolicy &&, Rng &&, F && = F(), 
                                   Proj && = Proj());
    }
  }
}

Function template min_element

hpx::parallel::v1::min_element

Synopsis

// In header: <hpx/parallel/container_algorithms/minmax.hpp>


template<typename ExPolicy, typename Rng, 
         typename Proj = util::projection_identity, 
         typename F = std::less<            typename std::remove_reference<                typename traits::projected_range_result_of<Proj, Rng>::type            >::type>, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_range< Rng >::value &&traits::is_projected_range< Proj, Rng >::value &&traits::is_indirect_callable< F,traits::projected_range< Proj, Rng >,traits::projected_range< Proj, Rng > >::value) > 
  unspecified min_element(ExPolicy && policy, Rng && rng, F && f = F(), 
                          Proj && proj = Proj());

Description

Finds the smallest element in the range [first, last) using the given comparison function f.

[Note]Note

Complexity: Exactly max(N-1, 0) comparisons, where N = std::distance(first, last).

The comparisons in the parallel min_element algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparisons in the parallel min_element algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

f

The binary predicate which returns true if the left argument is less than the right element. The signature of the predicate function should be equivalent to the following:

bool pred(const Type1 &a, const Type1 &b);


The signature does not need to have const &, but the function must not modify the objects passed to it. The type Type1 must be such that objects of type FwdIter can be dereferenced and then implicitly converted to Type1.

policy

The execution policy to use for the scheduling of the iterations.

proj

Specifies the function (or function object) which will be invoked for each of the elements as a projection operation before the actual predicate is invoked.

rng

Refers to the sequence of elements the algorithm will be applied to.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of min_element requires F to meet the requirements of CopyConstructible.

Proj

The type of an optional projection function. This defaults to util::projection_identity.

Rng

The type of the source range used (deduced). The iterators extracted from this range type must meet the requirements of a forward iterator.

Returns:

The min_element algorithm returns a hpx::future<FwdIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns FwdIter otherwise. The min_element algorithm returns the iterator to the smallest element in the range [first, last). If several elements in the range are equivalent to the smallest element, returns the iterator to the first such element. Returns last if the range is empty.


Function template max_element

hpx::parallel::v1::max_element

Synopsis

// In header: <hpx/parallel/container_algorithms/minmax.hpp>


template<typename ExPolicy, typename Rng, 
         typename Proj = util::projection_identity, 
         typename F = std::less<
             typename std::remove_reference<
                 typename traits::projected_range_result_of<Proj, Rng>::type
             >::type>,
         HPX_CONCEPT_REQUIRES_(
             is_execution_policy<ExPolicy>::value &&
             traits::is_range<Rng>::value &&
             traits::is_projected_range<Proj, Rng>::value &&
             traits::is_indirect_callable<F,
                 traits::projected_range<Proj, Rng>,
                 traits::projected_range<Proj, Rng> >::value)>
  unspecified max_element(ExPolicy && policy, Rng && rng, F && f = F(), 
                          Proj && proj = Proj());

Description

Finds the greatest element in the range [first, last) using the given comparison function f.

[Note]

Complexity: Exactly max(N-1, 0) comparisons, where N = std::distance(first, last).

The comparisons in the parallel max_element algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparisons in the parallel max_element algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

f

The binary predicate which returns true if the left argument is less than the right argument. This argument is optional and defaults to std::less. The signature of the predicate function should be equivalent to the following:

bool pred(const Type1 &a, const Type1 &b);


The signature does not need to have const &, but the function must not modify the objects passed to it. The type Type1 must be such that objects of type FwdIter can be dereferenced and then implicitly converted to Type1.

policy

The execution policy to use for the scheduling of the iterations.

proj

Specifies the function (or function object) which will be invoked for each of the elements as a projection operation before the actual predicate is invoked.

rng

Refers to the sequence of elements the algorithm will be applied to.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of max_element requires F to meet the requirements of CopyConstructible.

Proj

The type of an optional projection function. This defaults to util::projection_identity.

Rng

The type of the source range used (deduced). The iterators extracted from this range type must meet the requirements of a forward iterator.

Returns:

The max_element algorithm returns a hpx::future<FwdIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns FwdIter otherwise. The max_element algorithm returns the iterator to the greatest element in the range [first, last). If several elements in the range are equivalent to the greatest element, returns the iterator to the first such element. Returns last if the range is empty.


Function template minmax_element

hpx::parallel::v1::minmax_element

Synopsis

// In header: <hpx/parallel/container_algorithms/minmax.hpp>


template<typename ExPolicy, typename Rng, 
         typename Proj = util::projection_identity, 
         typename F = std::less<
             typename std::remove_reference<
                 typename traits::projected_range_result_of<Proj, Rng>::type
             >::type>,
         HPX_CONCEPT_REQUIRES_(
             is_execution_policy<ExPolicy>::value &&
             traits::is_range<Rng>::value &&
             traits::is_projected_range<Proj, Rng>::value &&
             traits::is_indirect_callable<F,
                 traits::projected_range<Proj, Rng>,
                 traits::projected_range<Proj, Rng> >::value)>
  unspecified minmax_element(ExPolicy && policy, Rng && rng, F && f = F(), 
                             Proj && proj = Proj());

Description

Finds the smallest and greatest elements in the range [first, last) using the given comparison function f.

[Note]

Complexity: At most max(floor(3/2*(N-1)), 0) applications of the predicate, where N = std::distance(first, last).

The comparisons in the parallel minmax_element algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparisons in the parallel minmax_element algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

f

The binary predicate which returns true if the left argument is less than the right argument. This argument is optional and defaults to std::less. The signature of the predicate function should be equivalent to the following:

bool pred(const Type1 &a, const Type1 &b);


The signature does not need to have const &, but the function must not modify the objects passed to it. The type Type1 must be such that objects of type FwdIter can be dereferenced and then implicitly converted to Type1.

policy

The execution policy to use for the scheduling of the iterations.

proj

Specifies the function (or function object) which will be invoked for each of the elements as a projection operation before the actual predicate is invoked.

rng

Refers to the sequence of elements the algorithm will be applied to.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of minmax_element requires F to meet the requirements of CopyConstructible.

Proj

The type of an optional projection function. This defaults to util::projection_identity.

Rng

The type of the source range used (deduced). The iterators extracted from this range type must meet the requirements of a forward iterator.

Returns:

The minmax_element algorithm returns a hpx::future<tagged_pair<tag::min(FwdIter), tag::max(FwdIter)> > if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns tagged_pair<tag::min(FwdIter), tag::max(FwdIter)> otherwise. The minmax_element algorithm returns a pair consisting of an iterator to the smallest element as the first element and an iterator to the greatest element as the second. Returns std::make_pair(first, first) if the range is empty. If several elements are equivalent to the smallest element, the iterator to the first such element is returned. If several elements are equivalent to the largest element, the iterator to the last such element is returned.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename InIter1, typename InIter2> 
        unspecified mismatch(ExPolicy &&, InIter1, InIter1, InIter2, InIter2);
      template<typename ExPolicy, typename InIter1, typename InIter2, 
               typename F> 
        unspecified mismatch(ExPolicy &&, InIter1, InIter1, InIter2, InIter2, 
                             F &&);
      template<typename ExPolicy, typename InIter1, typename InIter2> 
        unspecified mismatch(ExPolicy &&, InIter1, InIter1, InIter2);
      template<typename ExPolicy, typename InIter1, typename InIter2, 
               typename F> 
        unspecified mismatch(ExPolicy &&, InIter1, InIter1, InIter2, F &&);
    }
  }
}

Function template mismatch

hpx::parallel::v1::mismatch

Synopsis

// In header: <hpx/parallel/algorithms/mismatch.hpp>


template<typename ExPolicy, typename InIter1, typename InIter2> 
  unspecified mismatch(ExPolicy && policy, InIter1 first1, InIter1 last1, 
                       InIter2 first2, InIter2 last2);

Description

Returns std::pair with iterators to the first two non-equivalent elements of the ranges [first1, last1) and [first2, last2).

[Note]

Complexity: At most min(last1 - first1, last2 - first2) applications of the operator==().

The comparison operations in the parallel mismatch algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparison operations in the parallel mismatch algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

[Note]

The two ranges are compared element-wise: for each iterator i in the range [first1, last1), *i is compared with the corresponding element of the second range. This overload of mismatch uses operator== to determine whether two elements are equal.

Parameters:

first1

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

first2

Refers to the beginning of the sequence of elements of the second range the algorithm will be applied to.

last1

Refers to the end of the sequence of elements of the first range the algorithm will be applied to.

last2

Refers to the end of the sequence of elements of the second range the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter1

The type of the source iterators used for the first range (deduced). This iterator type must meet the requirements of an input iterator.

InIter2

The type of the source iterators used for the second range (deduced). This iterator type must meet the requirements of an input iterator.

Returns:

The mismatch algorithm returns a hpx::future<std::pair<InIter1, InIter2> > if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns std::pair<InIter1, InIter2> otherwise. The mismatch algorithm returns the first mismatching pair of elements from two ranges: one defined by [first1, last1) and another defined by [first2, last2).


Function template mismatch

hpx::parallel::v1::mismatch

Synopsis

// In header: <hpx/parallel/algorithms/mismatch.hpp>


template<typename ExPolicy, typename InIter1, typename InIter2, typename F> 
  unspecified mismatch(ExPolicy && policy, InIter1 first1, InIter1 last1, 
                       InIter2 first2, InIter2 last2, F && f);

Description

Returns std::pair with iterators to the first two non-equivalent elements of the ranges [first1, last1) and [first2, last2).

[Note]

Complexity: At most min(last1 - first1, last2 - first2) applications of the predicate f.

The comparison operations in the parallel mismatch algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparison operations in the parallel mismatch algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

[Note]

The two ranges are compared element-wise: for each iterator i in the range [first1, last1), *i is compared with the corresponding element of the second range. This overload of mismatch uses the given binary predicate f to determine whether two elements are equal.

Parameters:

f

The binary predicate which returns true if the elements should be treated as equal. The signature of the predicate function should be equivalent to the following:

bool pred(const Type1 &a, const Type2 &b);


The signature does not need to have const &, but the function must not modify the objects passed to it. The types Type1 and Type2 must be such that objects of types InIter1 and InIter2 can be dereferenced and then implicitly converted to Type1 and Type2 respectively

first1

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

first2

Refers to the beginning of the sequence of elements of the second range the algorithm will be applied to.

last1

Refers to the end of the sequence of elements of the first range the algorithm will be applied to.

last2

Refers to the end of the sequence of elements of the second range the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of mismatch requires F to meet the requirements of CopyConstructible.

InIter1

The type of the source iterators used for the first range (deduced). This iterator type must meet the requirements of an input iterator.

InIter2

The type of the source iterators used for the second range (deduced). This iterator type must meet the requirements of an input iterator.

Returns:

The mismatch algorithm returns a hpx::future<std::pair<InIter1, InIter2> > if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns std::pair<InIter1, InIter2> otherwise. The mismatch algorithm returns the first mismatching pair of elements from two ranges: one defined by [first1, last1) and another defined by [first2, last2).


Function template mismatch

hpx::parallel::v1::mismatch

Synopsis

// In header: <hpx/parallel/algorithms/mismatch.hpp>


template<typename ExPolicy, typename InIter1, typename InIter2> 
  unspecified mismatch(ExPolicy && policy, InIter1 first1, InIter1 last1, 
                       InIter2 first2);

Description

Returns std::pair with iterators to the first two non-equivalent elements.

[Note]

Complexity: At most last1 - first1 applications of the operator==().

The comparison operations in the parallel mismatch algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparison operations in the parallel mismatch algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

first1

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

first2

Refers to the beginning of the sequence of elements of the second range the algorithm will be applied to.

last1

Refers to the end of the sequence of elements of the first range the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter1

The type of the source iterators used for the first range (deduced). This iterator type must meet the requirements of an input iterator.

InIter2

The type of the source iterators used for the second range (deduced). This iterator type must meet the requirements of an input iterator.

Returns:

The mismatch algorithm returns a hpx::future<std::pair<InIter1, InIter2> > if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns std::pair<InIter1, InIter2> otherwise. The mismatch algorithm returns the first mismatching pair of elements from two ranges: one defined by [first1, last1) and another beginning at first2.


Function template mismatch

hpx::parallel::v1::mismatch

Synopsis

// In header: <hpx/parallel/algorithms/mismatch.hpp>


template<typename ExPolicy, typename InIter1, typename InIter2, typename F> 
  unspecified mismatch(ExPolicy && policy, InIter1 first1, InIter1 last1, 
                       InIter2 first2, F && f);

Description

Returns std::pair with iterators to the first two non-equivalent elements.

[Note]

Complexity: At most last1 - first1 applications of the predicate f.

The comparison operations in the parallel mismatch algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparison operations in the parallel mismatch algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

f

The binary predicate which returns true if the elements should be treated as equal. The signature of the predicate function should be equivalent to the following:

bool pred(const Type1 &a, const Type2 &b);


The signature does not need to have const &, but the function must not modify the objects passed to it. The types Type1 and Type2 must be such that objects of types InIter1 and InIter2 can be dereferenced and then implicitly converted to Type1 and Type2 respectively

first1

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

first2

Refers to the beginning of the sequence of elements of the second range the algorithm will be applied to.

last1

Refers to the end of the sequence of elements of the first range the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of mismatch requires F to meet the requirements of CopyConstructible.

InIter1

The type of the source iterators used for the first range (deduced). This iterator type must meet the requirements of an input iterator.

InIter2

The type of the source iterators used for the second range (deduced). This iterator type must meet the requirements of an input iterator.

Returns:

The mismatch algorithm returns a hpx::future<std::pair<InIter1, InIter2> > if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns std::pair<InIter1, InIter2> otherwise. The mismatch algorithm returns the first mismatching pair of elements from two ranges: one defined by [first1, last1) and another beginning at first2.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename InIter, typename OutIter> 
        unspecified move(ExPolicy &&, InIter, InIter, OutIter);
    }
  }
}

Function template move

hpx::parallel::v1::move

Synopsis

// In header: <hpx/parallel/algorithms/move.hpp>


template<typename ExPolicy, typename InIter, typename OutIter> 
  unspecified move(ExPolicy && policy, InIter first, InIter last, 
                   OutIter dest);

Description

Moves the elements in the range [first, last), to another range beginning at dest. After this operation the elements in the moved-from range will still contain valid values of the appropriate type, but not necessarily the same values as before the move.

[Note]

Complexity: Performs exactly last - first move assignments.

The move assignments in the parallel move algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The move assignments in the parallel move algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest

Refers to the beginning of the destination range.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the move assignments.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Returns:

The move algorithm returns a hpx::future<OutIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns OutIter otherwise. The move algorithm returns the output iterator to the element in the destination range, one past the last element moved.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename InIter, typename T, typename F> 
        unspecified reduce(ExPolicy &&, InIter, InIter, T, F &&);
      template<typename ExPolicy, typename InIter, typename T> 
        unspecified reduce(ExPolicy &&, InIter, InIter, T);
      template<typename ExPolicy, typename InIter> 
        unspecified reduce(ExPolicy &&, InIter, InIter);
    }
  }
}

Function template reduce

hpx::parallel::v1::reduce

Synopsis

// In header: <hpx/parallel/algorithms/reduce.hpp>


template<typename ExPolicy, typename InIter, typename T, typename F> 
  unspecified reduce(ExPolicy && policy, InIter first, InIter last, T init, 
                     F && f);

Description

Returns GENERALIZED_SUM(f, init, *first, ..., *(first + (last - first) - 1)).

[Note]

Complexity: O(last - first) applications of the predicate f.

The reduce operations in the parallel reduce algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The reduce operations in the parallel reduce algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

[Note]

GENERALIZED_SUM(op, a1, ..., aN) is defined as follows:

  • a1 when N is 1

  • op(GENERALIZED_SUM(op, b1, ..., bK), GENERALIZED_SUM(op, bM, ..., bN)), where:

    • b1, ..., bN may be any permutation of a1, ..., aN and

    • 1 < K+1 = M <= N.

The difference between reduce and accumulate is that the behavior of reduce may be non-deterministic for non-associative or non-commutative binary predicates.

Parameters:

f

Specifies the function (or function object) which will be invoked for each of the elements in the sequence specified by [first, last). This is a binary predicate. The signature of this predicate should be equivalent to:

Ret fun(const Type1 &a, const Type1 &b);


The signature does not need to have const&. The types Type1 and Ret must be such that an object of type InIter can be dereferenced and then implicitly converted to either of those types.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

init

The initial value for the generalized sum.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of reduce requires F to meet the requirements of CopyConstructible.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

T

The type of the value to be used as initial (and intermediate) values (deduced).

Returns:

The reduce algorithm returns a hpx::future<T> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns T otherwise. The reduce algorithm returns the result of the generalized sum over the elements given by the input range [first, last).


Function template reduce

hpx::parallel::v1::reduce

Synopsis

// In header: <hpx/parallel/algorithms/reduce.hpp>


template<typename ExPolicy, typename InIter, typename T> 
  unspecified reduce(ExPolicy && policy, InIter first, InIter last, T init);

Description

Returns GENERALIZED_SUM(+, init, *first, ..., *(first + (last - first) - 1)).

[Note]

Complexity: O(last - first) applications of the operator+().

The reduce operations in the parallel reduce algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The reduce operations in the parallel reduce algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

[Note]

GENERALIZED_SUM(+, a1, ..., aN) is defined as follows:

  • a1 when N is 1

  • op(GENERALIZED_SUM(+, b1, ..., bK), GENERALIZED_SUM(+, bM, ..., bN)), where:

    • b1, ..., bN may be any permutation of a1, ..., aN and

    • 1 < K+1 = M <= N.

The difference between reduce and accumulate is that the behavior of reduce may be non-deterministic for non-associative or non-commutative binary predicates.

Parameters:

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

init

The initial value for the generalized sum.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

T

The type of the value to be used as initial (and intermediate) values (deduced).

Returns:

The reduce algorithm returns a hpx::future<T> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns T otherwise. The reduce algorithm returns the result of the generalized sum (applying operator+()) over the elements given by the input range [first, last).


Function template reduce

hpx::parallel::v1::reduce

Synopsis

// In header: <hpx/parallel/algorithms/reduce.hpp>


template<typename ExPolicy, typename InIter> 
  unspecified reduce(ExPolicy && policy, InIter first, InIter last);

Description

Returns GENERALIZED_SUM(+, T(), *first, ..., *(first + (last - first) - 1)).

[Note]

Complexity: O(last - first) applications of the operator+().

The reduce operations in the parallel reduce algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The reduce operations in the parallel reduce algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

[Note]

The type of the initial value (and the result type) T is determined from the value_type of the used InIter.

GENERALIZED_SUM(+, a1, ..., aN) is defined as follows:

  • a1 when N is 1

  • op(GENERALIZED_SUM(+, b1, ..., bK), GENERALIZED_SUM(+, bM, ..., bN)), where:

    • b1, ..., bN may be any permutation of a1, ..., aN and

    • 1 < K+1 = M <= N.

The difference between reduce and accumulate is that the behavior of reduce may be non-deterministic for non-associative or non-commutative binary predicates.

Parameters:

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

Returns:

The reduce algorithm returns a hpx::future<T> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns T otherwise (where T is the value_type of InIter). The reduce algorithm returns the result of the generalized sum (applying operator+()) over the elements given by the input range [first, last).

namespace hpx {
  namespace lcos {
    template<typename Action, typename ReduceOp, typename ArgN, ... > 
      hpx::future< decltype(Action(hpx::id_type, ArgN,...))> 
      reduce(std::vector< hpx::id_type > const &, ReduceOp &&, ArgN, ...);
    template<typename Action, typename ReduceOp, typename ArgN, ... > 
      hpx::future< decltype(Action(hpx::id_type, ArgN,..., std::size_t))> 
      reduce_with_index(std::vector< hpx::id_type > const &, ReduceOp &&, 
                        ArgN, ...);
  }
}

Function template reduce

hpx::lcos::reduce — Perform a distributed reduction operation.

Synopsis

// In header: <hpx/lcos/reduce.hpp>


template<typename Action, typename ReduceOp, typename ArgN, ... > 
  hpx::future< decltype(Action(hpx::id_type, ArgN,...))> 
  reduce(std::vector< hpx::id_type > const & ids, ReduceOp && reduce_op, 
         ArgN argN, ...);

Description

The function hpx::lcos::reduce performs a distributed reduction operation over results returned from action invocations on a given set of global identifiers. The action can be either a plain action (in which case the global identifiers have to refer to localities) or a component action (in which case the global identifiers have to refer to instances of a component type which exposes the action).

Parameters:

argN

[in] Any number of arbitrary arguments (passed by const reference) which will be forwarded to the action invocation.

ids

[in] A list of global identifiers identifying the target objects for which the given action will be invoked.

reduce_op

[in] A binary function expecting two results as returned from the action invocations. The function (or function object) is expected to return the result of the reduction operation performed on its arguments.

Returns:

This function returns a future representing the result of the overall reduction operation.


Function template reduce_with_index

hpx::lcos::reduce_with_index — Perform a distributed reduction operation.

Synopsis

// In header: <hpx/lcos/reduce.hpp>


template<typename Action, typename ReduceOp, typename ArgN, ... > 
  hpx::future< decltype(Action(hpx::id_type, ArgN,..., std::size_t))> 
  reduce_with_index(std::vector< hpx::id_type > const & ids, 
                    ReduceOp && reduce_op, ArgN argN, ...);

Description

The function hpx::lcos::reduce_with_index performs a distributed reduction operation over results returned from action invocations on a given set of global identifiers. The action can be either a plain action (in which case the global identifiers have to refer to localities) or a component action (in which case the global identifiers have to refer to instances of a component type which exposes the action).

The function passes the index of the global identifier in the given list of identifiers as the last argument to the action.

Parameters:

argN

[in] Any number of arbitrary arguments (passed by const reference) which will be forwarded to the action invocation.

ids

[in] A list of global identifiers identifying the target objects for which the given action will be invoked.

reduce_op

[in] A binary function expecting two results as returned from the action invocations. The function (or function object) is expected to return the result of the reduction operation performed on its arguments.

Returns:

This function returns a future representing the result of the overall reduction operation.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename InIter, typename OutIter, 
               typename T, typename Proj = util::projection_identity, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value && traits::is_iterator< InIter >::value && traits::is_iterator< OutIter >::value && traits::is_projected< Proj, InIter >::value && traits::is_indirect_callable< std::equal_to< T >, traits::projected< Proj, InIter >, traits::projected< Proj, T const * > >::value) > 
        unspecified remove_copy(ExPolicy &&, InIter, InIter, OutIter, 
                                T const &, Proj && = Proj());
      template<typename ExPolicy, typename InIter, typename OutIter, 
               typename F, typename Proj = util::projection_identity, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value && traits::is_iterator< InIter >::value && traits::is_projected< Proj, InIter >::value && traits::is_indirect_callable< F, traits::projected< Proj, InIter > >::value && traits::is_iterator< OutIter >::value) > 
        unspecified remove_copy_if(ExPolicy &&, InIter, InIter, OutIter, F &&, 
                                   Proj && = Proj());
    }
  }
}

Function template remove_copy

hpx::parallel::v1::remove_copy

Synopsis

// In header: <hpx/parallel/algorithms/remove_copy.hpp>


template<typename ExPolicy, typename InIter, typename OutIter, typename T, 
         typename Proj = util::projection_identity, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value && traits::is_iterator< InIter >::value && traits::is_iterator< OutIter >::value && traits::is_projected< Proj, InIter >::value && traits::is_indirect_callable< std::equal_to< T >, traits::projected< Proj, InIter >, traits::projected< Proj, T const * > >::value) > 
  unspecified remove_copy(ExPolicy && policy, InIter first, InIter last, 
                          OutIter dest, T const & val, Proj && proj = Proj());

Description

Copies the elements in the range, defined by [first, last), to another range beginning at dest. Copies only the elements for which the comparison operator returns false when compared to val. The order of the elements that are not removed is preserved.

Effects: Copies all the elements referred to by the iterator it in the range [first,last) for which the following corresponding conditions do not hold: INVOKE(proj, *it) == value

[Note]Note

Complexity: Performs not more than last - first assignments and exactly last - first applications of the comparison operator.

The assignments in the parallel remove_copy algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel remove_copy algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest

Refers to the beginning of the destination range.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

proj

Specifies the function (or function object) which will be invoked for each of the elements as a projection operation before the actual predicate is invoked.

val

Value to be removed.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Proj

The type of an optional projection function. This defaults to util::projection_identity

T

The type that the result of dereferencing InIter is compared to.

Returns:

The remove_copy algorithm returns a hpx::future<tagged_pair<tag::in(InIter), tag::out(OutIter)> > if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns tagged_pair<tag::in(InIter), tag::out(OutIter)> otherwise. The remove_copy algorithm returns the pair of the input iterator forwarded to the first element after the last in the input sequence and the output iterator to the element in the destination range, one past the last element copied.


Function template remove_copy_if

hpx::parallel::v1::remove_copy_if

Synopsis

// In header: <hpx/parallel/algorithms/remove_copy.hpp>


template<typename ExPolicy, typename InIter, typename OutIter, typename F, 
         typename Proj = util::projection_identity, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value && traits::is_iterator< InIter >::value && traits::is_projected< Proj, InIter >::value && traits::is_indirect_callable< F, traits::projected< Proj, InIter > >::value && traits::is_iterator< OutIter >::value) > 
  unspecified remove_copy_if(ExPolicy && policy, InIter first, InIter last, 
                             OutIter dest, F && f, Proj && proj = Proj());

Description

Copies the elements in the range, defined by [first, last), to another range beginning at dest. Copies only the elements for which the predicate f returns false. The order of the elements that are not removed is preserved.

Effects: Copies all the elements referred to by the iterator it in the range [first,last) for which the following corresponding conditions do not hold: INVOKE(pred, INVOKE(proj, *it)) != false.

[Note]Note

Complexity: Performs not more than last - first assignments and exactly last - first applications of the predicate f.

The assignments in the parallel remove_copy_if algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel remove_copy_if algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest

Refers to the beginning of the destination range.

f

Specifies the function (or function object) which will be invoked for each of the elements in the sequence specified by [first, last). This is a unary predicate which returns true for the elements to be removed. The signature of this predicate should be equivalent to:

bool pred(const Type &a);


The signature does not need to have const&, but the function must not modify the objects passed to it. The type Type must be such that an object of type InIter can be dereferenced and then implicitly converted to Type.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

proj

Specifies the function (or function object) which will be invoked for each of the elements as a projection operation before the actual predicate is invoked.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of remove_copy_if requires F to meet the requirements of CopyConstructible.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Proj

The type of an optional projection function. This defaults to util::projection_identity

Returns:

The remove_copy_if algorithm returns a hpx::future<tagged_pair<tag::in(InIter), tag::out(OutIter)> > if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns tagged_pair<tag::in(InIter), tag::out(OutIter)> otherwise. The remove_copy_if algorithm returns the pair of the input iterator forwarded to the first element after the last in the input sequence and the output iterator to the element in the destination range, one past the last element copied.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename Rng, typename OutIter, typename T, 
               typename Proj = util::projection_identity, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value && traits::is_range< Rng >::value && traits::is_iterator< OutIter >::value && traits::is_projected_range< Proj, Rng >::value && traits::is_indirect_callable< std::equal_to< T >, traits::projected_range< Proj, Rng >, traits::projected< Proj, T const * > >::value) > 
        unspecified remove_copy(ExPolicy &&, Rng &&, OutIter, T const &, 
                                Proj && = Proj());
      template<typename ExPolicy, typename Rng, typename OutIter, typename F, 
               typename Proj = util::projection_identity, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value && traits::is_range< Rng >::value && traits::is_iterator< OutIter >::value && traits::is_projected_range< Proj, Rng >::value && traits::is_indirect_callable< F, traits::projected_range< Proj, Rng > >::value) > 
        unspecified remove_copy_if(ExPolicy &&, Rng &&, OutIter, F &&, 
                                   Proj && = Proj());
    }
  }
}

Function template remove_copy

hpx::parallel::v1::remove_copy

Synopsis

// In header: <hpx/parallel/container_algorithms/remove_copy.hpp>


template<typename ExPolicy, typename Rng, typename OutIter, typename T, 
         typename Proj = util::projection_identity, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value && traits::is_range< Rng >::value && traits::is_iterator< OutIter >::value && traits::is_projected_range< Proj, Rng >::value && traits::is_indirect_callable< std::equal_to< T >, traits::projected_range< Proj, Rng >, traits::projected< Proj, T const * > >::value) > 
  unspecified remove_copy(ExPolicy && policy, Rng && rng, OutIter dest, 
                          T const & val, Proj && proj = Proj());

Description

Copies the elements in the range, defined by [first, last), to another range beginning at dest. Copies only the elements for which the comparison operator returns false when compared to val. The order of the elements that are not removed is preserved.

Effects: Copies all the elements referred to by the iterator it in the range [first,last) for which the following corresponding conditions do not hold: INVOKE(proj, *it) == value

[Note]Note

Complexity: Performs not more than last - first assignments and exactly last - first applications of the comparison operator.

The assignments in the parallel remove_copy algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel remove_copy algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest

Refers to the beginning of the destination range.

policy

The execution policy to use for the scheduling of the iterations.

proj

Specifies the function (or function object) which will be invoked for each of the elements as a projection operation before the actual predicate is invoked.

rng

Refers to the sequence of elements the algorithm will be applied to.

val

Value to be removed.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Proj

The type of an optional projection function. This defaults to util::projection_identity

Rng

The type of the source range used (deduced). The iterators extracted from this range type must meet the requirements of an input iterator.

T

The type that the result of dereferencing InIter is compared to.

Returns:

The remove_copy algorithm returns a hpx::future<tagged_pair<tag::in(InIter), tag::out(OutIter)> > if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns tagged_pair<tag::in(InIter), tag::out(OutIter)> otherwise. The remove_copy algorithm returns the pair of the input iterator forwarded to the first element after the last in the input sequence and the output iterator to the element in the destination range, one past the last element copied.


Function template remove_copy_if

hpx::parallel::v1::remove_copy_if

Synopsis

// In header: <hpx/parallel/container_algorithms/remove_copy.hpp>


template<typename ExPolicy, typename Rng, typename OutIter, typename F, 
         typename Proj = util::projection_identity, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value && traits::is_range< Rng >::value && traits::is_iterator< OutIter >::value && traits::is_projected_range< Proj, Rng >::value && traits::is_indirect_callable< F, traits::projected_range< Proj, Rng > >::value) > 
  unspecified remove_copy_if(ExPolicy && policy, Rng && rng, OutIter dest, 
                             F && f, Proj && proj = Proj());

Description

Copies the elements in the range, defined by [first, last), to another range beginning at dest. Copies only the elements for which the predicate f returns false. The order of the elements that are not removed is preserved.

Effects: Copies all the elements referred to by the iterator it in the range [first,last) for which the following corresponding conditions do not hold: INVOKE(pred, INVOKE(proj, *it)) != false.

[Note]Note

Complexity: Performs not more than last - first assignments and exactly last - first applications of the predicate f.

The assignments in the parallel remove_copy_if algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel remove_copy_if algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest

Refers to the beginning of the destination range.

f

Specifies the function (or function object) which will be invoked for each of the elements in the sequence specified by [first, last). This is a unary predicate which returns true for the elements to be removed. The signature of this predicate should be equivalent to:

bool pred(const Type &a);


The signature does not need to have const&, but the function must not modify the objects passed to it. The type Type must be such that an object of type InIter can be dereferenced and then implicitly converted to Type.

policy

The execution policy to use for the scheduling of the iterations.

proj

Specifies the function (or function object) which will be invoked for each of the elements as a projection operation before the actual predicate is invoked.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of remove_copy_if requires F to meet the requirements of CopyConstructible.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Proj

The type of an optional projection function. This defaults to util::projection_identity

Returns:

The remove_copy_if algorithm returns a hpx::future<tagged_pair<tag::in(InIter), tag::out(OutIter)> > if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns tagged_pair<tag::in(InIter), tag::out(OutIter)> otherwise. The remove_copy_if algorithm returns the pair of the input iterator forwarded to the first element after the last in the input sequence and the output iterator to the element in the destination range, one past the last element copied.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename FwdIter, typename T1, typename T2, 
               typename Proj = util::projection_identity, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value && traits::is_iterator< FwdIter >::value && traits::is_projected< Proj, FwdIter >::value && traits::is_indirect_callable< std::equal_to< T1 >, traits::projected< Proj, FwdIter >, traits::projected< Proj, T1 const * > >::value) > 
        unspecified replace(ExPolicy &&, FwdIter, FwdIter, T1 const &, 
                            T2 const &, Proj && = Proj());
      template<typename ExPolicy, typename FwdIter, typename F, typename T, 
               typename Proj = util::projection_identity, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value && traits::is_iterator< FwdIter >::value && traits::is_projected< Proj, FwdIter >::value && traits::is_indirect_callable< F, traits::projected< Proj, FwdIter > >::value) > 
        unspecified replace_if(ExPolicy &&, FwdIter, FwdIter, F &&, T const &, 
                               Proj && = Proj());
      template<typename ExPolicy, typename InIter, typename OutIter, 
               typename T1, typename T2, 
               typename Proj = util::projection_identity, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value && traits::is_iterator< InIter >::value && traits::is_projected< Proj, InIter >::value && traits::is_indirect_callable< std::equal_to< T1 >, traits::projected< Proj, InIter >, traits::projected< Proj, T1 const * > >::value) > 
        unspecified replace_copy(ExPolicy &&, InIter, InIter, OutIter, 
                                 T1 const &, T2 const &, Proj && = Proj());
      template<typename ExPolicy, typename InIter, typename OutIter, 
               typename F, typename T, 
               typename Proj = util::projection_identity, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value && traits::is_iterator< InIter >::value && traits::is_projected< Proj, InIter >::value && traits::is_indirect_callable< F, traits::projected< Proj, InIter > >::value) > 
        unspecified replace_copy_if(ExPolicy &&, InIter, InIter, OutIter, 
                                    F &&, T const &, Proj && = Proj());
    }
  }
}

Function template replace

hpx::parallel::v1::replace

Synopsis

// In header: <hpx/parallel/algorithms/replace.hpp>


template<typename ExPolicy, typename FwdIter, typename T1, typename T2, 
         typename Proj = util::projection_identity, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value && traits::is_iterator< FwdIter >::value && traits::is_projected< Proj, FwdIter >::value && traits::is_indirect_callable< std::equal_to< T1 >, traits::projected< Proj, FwdIter >, traits::projected< Proj, T1 const * > >::value) > 
  unspecified replace(ExPolicy && policy, FwdIter first, FwdIter last, 
                      T1 const & old_value, T2 const & new_value, 
                      Proj && proj = Proj());

Description

Replaces all elements satisfying specific criteria with new_value in the range [first, last).

Effects: Substitutes elements referred to by the iterator it in the range [first, last) with new_value, when the following corresponding conditions hold: INVOKE(proj, *it) == old_value

[Note]Note

Complexity: Performs exactly last - first assignments.

The assignments in the parallel replace algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel replace algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

new_value

Refers to the new value to use as the replacement.

old_value

Refers to the old value of the elements to replace.

policy

The execution policy to use for the scheduling of the iterations.

proj

Specifies the function (or function object) which will be invoked for each of the elements as a projection operation before the actual predicate is invoked.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

FwdIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of a forward iterator.

Proj

The type of an optional projection function. This defaults to util::projection_identity

T1

The type of the old value to replace (deduced).

T2

The type of the new values to replace (deduced).

Returns:

The replace algorithm returns a hpx::future<FwdIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns FwdIter otherwise. It returns last.


Function template replace_if

hpx::parallel::v1::replace_if

Synopsis

// In header: <hpx/parallel/algorithms/replace.hpp>


template<typename ExPolicy, typename FwdIter, typename F, typename T, 
         typename Proj = util::projection_identity, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value && traits::is_iterator< FwdIter >::value && traits::is_projected< Proj, FwdIter >::value && traits::is_indirect_callable< F, traits::projected< Proj, FwdIter > >::value) > 
  unspecified replace_if(ExPolicy && policy, FwdIter first, FwdIter last, 
                         F && f, T const & new_value, Proj && proj = Proj());

Description

Replaces all elements satisfying specific criteria (for which predicate f returns true) with new_value in the range [first, last).

Effects: Substitutes elements referred to by the iterator it in the range [first, last) with new_value, when the following corresponding conditions hold: INVOKE(f, INVOKE(proj, *it)) != false

[Note]Note

Complexity: Performs exactly last - first applications of the predicate.

The assignments in the parallel replace_if algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel replace_if algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

f

Specifies the function (or function object) which will be invoked for each of the elements in the sequence specified by [first, last). This is a unary predicate which returns true for the elements which need to be replaced. The signature of this predicate should be equivalent to:

bool pred(const Type &a);


The signature does not need to have const&, but the function must not modify the objects passed to it. The type Type must be such that an object of type FwdIter can be dereferenced and then implicitly converted to Type.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

new_value

Refers to the new value to use as the replacement.

policy

The execution policy to use for the scheduling of the iterations.

proj

Specifies the function (or function object) which will be invoked for each of the elements as a projection operation before the actual predicate is invoked.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of replace_if requires F to meet the requirements of CopyConstructible.

FwdIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of a forward iterator.

Proj

The type of an optional projection function. This defaults to util::projection_identity

T

The type of the new values to replace (deduced).

Returns:

The replace_if algorithm returns a hpx::future<FwdIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns FwdIter otherwise. It returns last.


Function template replace_copy

hpx::parallel::v1::replace_copy

Synopsis

// In header: <hpx/parallel/algorithms/replace.hpp>


template<typename ExPolicy, typename InIter, typename OutIter, typename T1, 
         typename T2, typename Proj = util::projection_identity, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value && traits::is_iterator< InIter >::value && traits::is_projected< Proj, InIter >::value && traits::is_indirect_callable< std::equal_to< T1 >, traits::projected< Proj, InIter >, traits::projected< Proj, T1 const * > >::value) > 
  unspecified replace_copy(ExPolicy && policy, InIter first, InIter last, 
                           OutIter dest, T1 const & old_value, 
                           T2 const & new_value, Proj && proj = Proj());

Description

Copies all the elements from the range [first, last) to another range beginning at dest, replacing all elements satisfying specific criteria with new_value.

Effects: Assigns to every iterator it in the range [result, result + (last - first)) either new_value or *(first + (it - result)) depending on whether the following corresponding condition holds: INVOKE(proj, *(first + (it - result))) == old_value

[Note]Note

Complexity: Performs exactly last - first applications of the predicate.

The assignments in the parallel replace_copy algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel replace_copy algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest

Refers to the beginning of the destination range.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

new_value

Refers to the new value to use as the replacement.

old_value

Refers to the old value of the elements to replace.

policy

The execution policy to use for the scheduling of the iterations.

proj

Specifies the function (or function object) which will be invoked for each of the elements as a projection operation before the actual predicate is invoked.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Proj

The type of an optional projection function. This defaults to util::projection_identity

T1

The type of the old value to replace (deduced).

T2

The type of the new values to replace (deduced).

Returns:

The replace_copy algorithm returns a hpx::future<tagged_pair<tag::in(InIter), tag::out(OutIter)> > if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns tagged_pair<tag::in(InIter), tag::out(OutIter)> otherwise. The replace_copy algorithm returns the pair of the input iterator last and the output iterator to the element in the destination range, one past the last element copied.


Function template replace_copy_if

hpx::parallel::v1::replace_copy_if

Synopsis

// In header: <hpx/parallel/algorithms/replace.hpp>


template<typename ExPolicy, typename InIter, typename OutIter, typename F, 
         typename T, typename Proj = util::projection_identity, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< InIter >::value &&traits::is_projected< Proj, InIter >::value &&traits::is_indirect_callable< F, traits::projected< Proj, InIter > >::value) > 
  unspecified replace_copy_if(ExPolicy && policy, InIter first, InIter last, 
                              OutIter dest, F && f, T const & new_value, 
                              Proj && proj = Proj());

Description

Copies all elements from the range [first, last) to another range beginning at dest, replacing all elements satisfying specific criteria with new_value.

Effects: Assigns to every iterator it in the range [result, result + (last - first)) either new_value or *(first + (it - result)) depending on whether the following corresponding condition holds: INVOKE(f, INVOKE(proj, *(first + (it - result)))) != false

[Note]Note

Complexity: Performs exactly last - first applications of the predicate.

The assignments in the parallel replace_copy_if algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel replace_copy_if algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest

Refers to the beginning of the destination range.

f

Specifies the function (or function object) which will be invoked for each of the elements in the sequence specified by [first, last). This is a unary predicate which returns true for the elements which need to be replaced. The signature of this predicate should be equivalent to:

bool pred(const Type &a);


The signature does not need to have const&, but the function must not modify the objects passed to it. The type Type must be such that an object of type InIter can be dereferenced and then implicitly converted to Type.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

new_value

Refers to the new value to use as the replacement.

policy

The execution policy to use for the scheduling of the iterations.

proj

Specifies the function (or function object) which will be invoked for each of the elements as a projection operation before the actual predicate is invoked.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of replace_copy_if requires F to meet the requirements of CopyConstructible.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Proj

The type of an optional projection function. This defaults to util::projection_identity

T

The type of the new values to replace (deduced).

Returns:

The replace_copy_if algorithm returns a hpx::future<tagged_pair<tag::in(InIter), tag::out(OutIter)> > if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns tagged_pair<tag::in(InIter), tag::out(OutIter)> otherwise. The replace_copy_if algorithm returns the pair of the input iterator last and the output iterator to the element in the destination range, one past the last element copied.
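A sequential sketch of the replace_copy_if effects, using the equivalent std::replace_copy_if (helper name and predicate are illustrative):

```cpp
#include <algorithm>
#include <vector>

// Sketch of the replace_copy_if effects: elements for which the predicate
// returns true are written to the destination as new_value; all others
// are copied unchanged.
std::vector<int> replace_copy_if_demo(std::vector<int> const& src,
                                      int new_value)
{
    std::vector<int> out(src.size());
    std::replace_copy_if(src.begin(), src.end(), out.begin(),
                         [](int v) { return v < 0; },  // the predicate f
                         new_value);
    return out;
}
```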

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename Rng, typename T1, typename T2, 
               typename Proj = util::projection_identity, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_range< Rng >::value &&traits::is_projected_range< Proj, Rng >::value &&traits::is_indirect_callable< std::equal_to< T1 >,traits::projected_range< Proj, Rng >,traits::projected< Proj, T1 const * > >::value) > 
        unspecified replace(ExPolicy &&, Rng &&, T1 const &, T2 const &, 
                            Proj && = Proj());
      template<typename ExPolicy, typename Rng, typename F, typename T, 
               typename Proj = util::projection_identity, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_range< Rng >::value &&traits::is_projected_range< Proj, Rng >::value &&traits::is_indirect_callable< F, traits::projected_range< Proj, Rng > >::value) > 
        unspecified replace_if(ExPolicy &&, Rng &&, F &&, T const &, 
                               Proj && = Proj());
      template<typename ExPolicy, typename Rng, typename OutIter, typename T1, 
               typename T2, typename Proj = util::projection_identity, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_range< Rng >::value &&traits::is_projected_range< Proj, Rng >::value &&traits::is_indirect_callable< std::equal_to< T1 >,traits::projected_range< Proj, Rng >,traits::projected< Proj, T1 const * > >::value) > 
        unspecified replace_copy(ExPolicy &&, Rng &&, OutIter, T1 const &, 
                                 T2 const &, Proj && = Proj());
      template<typename ExPolicy, typename Rng, typename OutIter, typename F, 
               typename T, typename Proj = util::projection_identity, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_range< Rng >::value &&traits::is_projected_range< Proj, Rng >::value &&traits::is_indirect_callable< F, traits::projected_range< Proj, Rng > >::value) > 
        unspecified replace_copy_if(ExPolicy &&, Rng &&, OutIter, F &&, 
                                    T const &, Proj && = Proj());
    }
  }
}

Function template replace

hpx::parallel::v1::replace

Synopsis

// In header: <hpx/parallel/container_algorithms/replace.hpp>


template<typename ExPolicy, typename Rng, typename T1, typename T2, 
         typename Proj = util::projection_identity, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_range< Rng >::value &&traits::is_projected_range< Proj, Rng >::value &&traits::is_indirect_callable< std::equal_to< T1 >,traits::projected_range< Proj, Rng >,traits::projected< Proj, T1 const * > >::value) > 
  unspecified replace(ExPolicy && policy, Rng && rng, T1 const & old_value, 
                      T2 const & new_value, Proj && proj = Proj());

Description

Replaces all elements satisfying specific criteria with new_value in the range [first, last).

[Note]Note

Complexity: Performs exactly last - first assignments.

Effects: Substitutes elements referred to by the iterator it in the range [first, last) with new_value, when the following corresponding condition holds: INVOKE(proj, *it) == old_value

The assignments in the parallel replace algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel replace algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

new_value

Refers to the new value to use as the replacement.

old_value

Refers to the old value of the elements to replace.

policy

The execution policy to use for the scheduling of the iterations.

proj

Specifies the function (or function object) which will be invoked for each of the elements as a projection operation before the actual predicate is invoked.

rng

Refers to the sequence of elements the algorithm will be applied to.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

Proj

The type of an optional projection function. This defaults to util::projection_identity

Rng

The type of the source range used (deduced). The iterators extracted from this range type must meet the requirements of a forward iterator.

T1

The type of the old value to replace (deduced).

T2

The type of the new values to replace (deduced).

Returns:

The replace algorithm returns a hpx::future<void> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns void otherwise.
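The container overload simply takes the whole range in place of the first/last iterator pair. A sequential sketch of the in-place effect, via the equivalent std::replace (names are illustrative):

```cpp
#include <algorithm>
#include <vector>

// In-place replace over a whole container: every element equal to
// old_value becomes new_value; nothing is returned, mirroring the
// void/future<void> result documented above.
void replace_demo(std::vector<int>& rng, int old_value, int new_value)
{
    std::replace(rng.begin(), rng.end(), old_value, new_value);
}
```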


Function template replace_if

hpx::parallel::v1::replace_if

Synopsis

// In header: <hpx/parallel/container_algorithms/replace.hpp>


template<typename ExPolicy, typename Rng, typename F, typename T, 
         typename Proj = util::projection_identity, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_range< Rng >::value &&traits::is_projected_range< Proj, Rng >::value &&traits::is_indirect_callable< F, traits::projected_range< Proj, Rng > >::value) > 
  unspecified replace_if(ExPolicy && policy, Rng && rng, F && f, 
                         T const & new_value, Proj && proj = Proj());

Description

Replaces all elements satisfying specific criteria (for which predicate f returns true) with new_value in the range [first, last).

[Note]Note

Complexity: Performs exactly last - first applications of the predicate.

Effects: Substitutes elements referred to by the iterator it in the range [first, last) with new_value, when the following corresponding condition holds: INVOKE(f, INVOKE(proj, *it)) != false

The assignments in the parallel replace_if algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel replace_if algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

f

Specifies the function (or function object) which will be invoked for each of the elements in the sequence specified by [first, last). This is a unary predicate which returns true for the elements which need to be replaced. The signature of this predicate should be equivalent to:

bool pred(const Type &a);


The signature does not need to have const&, but the function must not modify the objects passed to it. The type Type must be such that an object of the iterator type of Rng can be dereferenced and then implicitly converted to Type.

new_value

Refers to the new value to use as the replacement.

policy

The execution policy to use for the scheduling of the iterations.

proj

Specifies the function (or function object) which will be invoked for each of the elements as a projection operation before the actual predicate is invoked.

rng

Refers to the sequence of elements the algorithm will be applied to.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of replace_if requires F to meet the requirements of CopyConstructible.

Proj

The type of an optional projection function. This defaults to util::projection_identity

Rng

The type of the source range used (deduced). The iterators extracted from this range type must meet the requirements of a forward iterator.

T

The type of the new values to replace (deduced).

Returns:

The replace_if algorithm returns a hpx::future<FwdIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns FwdIter otherwise. It returns last.
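A sequential sketch of the replace_if effect on a whole container, via the equivalent std::replace_if (helper name and predicate are illustrative):

```cpp
#include <algorithm>
#include <vector>

// In-place replace_if: every element satisfying the predicate f becomes
// new_value; other elements are left untouched.
void replace_if_demo(std::vector<int>& rng, int new_value)
{
    std::replace_if(rng.begin(), rng.end(),
                    [](int v) { return v % 2 == 0; },  // the predicate f
                    new_value);
}
```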


Function template replace_copy

hpx::parallel::v1::replace_copy

Synopsis

// In header: <hpx/parallel/container_algorithms/replace.hpp>


template<typename ExPolicy, typename Rng, typename OutIter, typename T1, 
         typename T2, typename Proj = util::projection_identity, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_range< Rng >::value &&traits::is_projected_range< Proj, Rng >::value &&traits::is_indirect_callable< std::equal_to< T1 >,traits::projected_range< Proj, Rng >,traits::projected< Proj, T1 const * > >::value) > 
  unspecified replace_copy(ExPolicy && policy, Rng && rng, OutIter dest, 
                           T1 const & old_value, T2 const & new_value, 
                           Proj && proj = Proj());

Description

Copies all elements from the range [first, last) to another range beginning at dest, replacing all elements satisfying specific criteria with new_value.

Effects: Assigns to every iterator it in the range [result, result + (last - first)) either new_value or *(first + (it - result)) depending on whether the following corresponding condition holds: INVOKE(proj, *(first + (it - result))) == old_value

[Note]Note

Complexity: Performs exactly last - first applications of the predicate.

The assignments in the parallel replace_copy algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel replace_copy algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest

Refers to the beginning of the destination range.

new_value

Refers to the new value to use as the replacement.

old_value

Refers to the old value of the elements to replace.

policy

The execution policy to use for the scheduling of the iterations.

proj

Specifies the function (or function object) which will be invoked for each of the elements as a projection operation before the actual predicate is invoked.

rng

Refers to the sequence of elements the algorithm will be applied to.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Proj

The type of an optional projection function. This defaults to util::projection_identity

Rng

The type of the source range used (deduced). The iterators extracted from this range type must meet the requirements of an input iterator.

T1

The type of the old value to replace (deduced).

T2

The type of the new values to replace (deduced).

Returns:

The replace_copy algorithm returns a hpx::future<tagged_pair<tag::in(InIter), tag::out(OutIter)> > if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns tagged_pair<tag::in(InIter), tag::out(OutIter)> otherwise. The replace_copy algorithm returns the pair of the input iterator last and the output iterator to the element in the destination range, one past the last element copied.


Function template replace_copy_if

hpx::parallel::v1::replace_copy_if

Synopsis

// In header: <hpx/parallel/container_algorithms/replace.hpp>


template<typename ExPolicy, typename Rng, typename OutIter, typename F, 
         typename T, typename Proj = util::projection_identity, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_range< Rng >::value &&traits::is_projected_range< Proj, Rng >::value &&traits::is_indirect_callable< F, traits::projected_range< Proj, Rng > >::value) > 
  unspecified replace_copy_if(ExPolicy && policy, Rng && rng, OutIter dest, 
                              F && f, T const & new_value, 
                              Proj && proj = Proj());

Description

Copies all elements from the range [first, last) to another range beginning at dest, replacing all elements satisfying specific criteria with new_value.

Effects: Assigns to every iterator it in the range [result, result + (last - first)) either new_value or *(first + (it - result)) depending on whether the following corresponding condition holds: INVOKE(f, INVOKE(proj, *(first + (it - result)))) != false

[Note]Note

Complexity: Performs exactly last - first applications of the predicate.

The assignments in the parallel replace_copy_if algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel replace_copy_if algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest

Refers to the beginning of the destination range.

f

Specifies the function (or function object) which will be invoked for each of the elements in the sequence specified by [first, last). This is a unary predicate which returns true for the elements which need to be replaced. The signature of this predicate should be equivalent to:

bool pred(const Type &a);


The signature does not need to have const&, but the function must not modify the objects passed to it. The type Type must be such that an object of the iterator type of Rng can be dereferenced and then implicitly converted to Type.

new_value

Refers to the new value to use as the replacement.

policy

The execution policy to use for the scheduling of the iterations.

proj

Specifies the function (or function object) which will be invoked for each of the elements as a projection operation before the actual predicate is invoked.

rng

Refers to the sequence of elements the algorithm will be applied to.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of replace_copy_if requires F to meet the requirements of CopyConstructible.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Proj

The type of an optional projection function. This defaults to util::projection_identity

Rng

The type of the source range used (deduced). The iterators extracted from this range type must meet the requirements of an input iterator.

T

The type of the new values to replace (deduced).

Returns:

The replace_copy_if algorithm returns a hpx::future<tagged_pair<tag::in(InIter), tag::out(OutIter)> > if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns tagged_pair<tag::in(InIter), tag::out(OutIter)> otherwise. The replace_copy_if algorithm returns the pair of the input iterator last and the output iterator to the element in the destination range, one past the last element copied.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename BidirIter, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< BidirIter >::value) > 
        unspecified reverse(ExPolicy &&, BidirIter, BidirIter);
      template<typename ExPolicy, typename BidirIter, typename OutIter, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< BidirIter >::value &&traits::is_iterator< OutIter >::value) > 
        unspecified reverse_copy(ExPolicy &&, BidirIter, BidirIter, OutIter);
    }
  }
}

Function template reverse

hpx::parallel::v1::reverse

Synopsis

// In header: <hpx/parallel/algorithms/reverse.hpp>


template<typename ExPolicy, typename BidirIter, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< BidirIter >::value) > 
  unspecified reverse(ExPolicy && policy, BidirIter first, BidirIter last);

Description

Reverses the order of the elements in the range [first, last). Behaves as if applying std::iter_swap to every pair of iterators first+i, (last-i) - 1 for each non-negative i < (last-first)/2.

[Note]Note

Complexity: Linear in the distance between first and last.

The assignments in the parallel reverse algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel reverse algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

BidirIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of a bidirectional iterator.

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

Returns:

The reverse algorithm returns a hpx::future<BidirIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns BidirIter otherwise. It returns last.
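A sequential sketch of the reverse effects, via the equivalent std::reverse (the helper name is illustrative):

```cpp
#include <algorithm>
#include <vector>

// Sketch of reverse: behaves as if swapping first+i with (last-i)-1 for
// each non-negative i < (last-first)/2, reversing the element order.
std::vector<int> reverse_demo(std::vector<int> v)
{
    std::reverse(v.begin(), v.end());
    return v;
}
```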


Function template reverse_copy

hpx::parallel::v1::reverse_copy

Synopsis

// In header: <hpx/parallel/algorithms/reverse.hpp>


template<typename ExPolicy, typename BidirIter, typename OutIter, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< BidirIter >::value &&traits::is_iterator< OutIter >::value) > 
  unspecified reverse_copy(ExPolicy && policy, BidirIter first, 
                           BidirIter last, OutIter dest_first);

Description

Copies the elements from the range [first, last) to another range beginning at dest_first in such a way that the elements in the new range are in reverse order. Behaves as if by executing the assignment *(dest_first + (last - first) - 1 - i) = *(first + i) once for each non-negative i < (last - first) If the source and destination ranges (that is, [first, last) and [dest_first, dest_first+(last-first)) respectively) overlap, the behavior is undefined.

[Note]Note

Complexity: Performs exactly last - first assignments.

The assignments in the parallel reverse_copy algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel reverse_copy algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest_first

Refers to the beginning of the destination range.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

BidirIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of a bidirectional iterator.

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

Returns:

The reverse_copy algorithm returns a hpx::future<tagged_pair<tag::in(BidirIter), tag::out(OutIter)> > if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns tagged_pair<tag::in(BidirIter), tag::out(OutIter)> otherwise. The copy algorithm returns the pair of the input iterator forwarded to the first element after the last in the input sequence and the output iterator to the element in the destination range, one past the last element copied.
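A sequential sketch of the reverse_copy effects, via the equivalent std::reverse_copy (the helper name is illustrative):

```cpp
#include <algorithm>
#include <vector>

// Sketch of reverse_copy: the destination receives the source elements in
// reverse order; the source range itself is left unchanged. Source and
// destination must not overlap.
std::vector<int> reverse_copy_demo(std::vector<int> const& src)
{
    std::vector<int> out(src.size());
    std::reverse_copy(src.begin(), src.end(), out.begin());
    return out;
}
```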

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename Rng, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_range< Rng >::value) > 
        unspecified reverse(ExPolicy &&, Rng &&);
      template<typename ExPolicy, typename Rng, typename OutIter, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_range< Rng >::value &&traits::is_iterator< OutIter >::value) > 
        unspecified reverse_copy(ExPolicy &&, Rng &&, OutIter);
    }
  }
}

Function template reverse

hpx::parallel::v1::reverse

Synopsis

// In header: <hpx/parallel/container_algorithms/reverse.hpp>


template<typename ExPolicy, typename Rng, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_range< Rng >::value) > 
  unspecified reverse(ExPolicy && policy, Rng && rng);

Description

Reverses the order of the elements in the range [first, last). Behaves as if applying std::iter_swap to every pair of iterators first+i, (last-i) - 1 for each non-negative i < (last-first)/2.

[Note]Note

Complexity: Linear in the distance between first and last.

The assignments in the parallel reverse algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel reverse algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

policy

The execution policy to use for the scheduling of the iterations.

rng

Refers to the sequence of elements the algorithm will be applied to.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

Rng

The type of the source range used (deduced). The iterators extracted from this range type must meet the requirements of a bidirectional iterator.

Returns:

The reverse algorithm returns a hpx::future<BidirIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns BidirIter otherwise. It returns last.


Function template reverse_copy

hpx::parallel::v1::reverse_copy

Synopsis

// In header: <hpx/parallel/container_algorithms/reverse.hpp>


template<typename ExPolicy, typename Rng, typename OutIter, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_range< Rng >::value &&traits::is_iterator< OutIter >::value) > 
  unspecified reverse_copy(ExPolicy && policy, Rng && rng, OutIter dest_first);

Description

Copies the elements from the range [first, last) to another range beginning at dest_first in such a way that the elements in the new range are in reverse order. Behaves as if by executing the assignment *(dest_first + (last - first) - 1 - i) = *(first + i) once for each non-negative i < (last - first) If the source and destination ranges (that is, [first, last) and [dest_first, dest_first+(last-first)) respectively) overlap, the behavior is undefined.

[Note]Note

Complexity: Performs exactly last - first assignments.

The assignments in the parallel reverse_copy algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel reverse_copy algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest_first

Refers to the beginning of the destination range.

policy

The execution policy to use for the scheduling of the iterations.

rng

Refers to the sequence of elements the algorithm will be applied to.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

Rng

The type of the source range used (deduced). The iterators extracted from this range type must meet the requirements of a bidirectional iterator.

Returns:

The reverse_copy algorithm returns a hpx::future<tagged_pair<tag::in(BidirIter), tag::out(OutIter)> > if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns tagged_pair<tag::in(BidirIter), tag::out(OutIter)> otherwise. The copy algorithm returns the pair of the input iterator forwarded to the first element after the last in the input sequence and the output iterator to the element in the destination range, one past the last element copied.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename FwdIter, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< FwdIter >::value) > 
        unspecified rotate(ExPolicy &&, FwdIter, FwdIter, FwdIter);
      template<typename ExPolicy, typename FwdIter, typename OutIter, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< FwdIter >::value &&traits::is_iterator< OutIter >::value) > 
        unspecified rotate_copy(ExPolicy &&, FwdIter, FwdIter, FwdIter, 
                                OutIter);
    }
  }
}

Function template rotate

hpx::parallel::v1::rotate

Synopsis

// In header: <hpx/parallel/algorithms/rotate.hpp>


template<typename ExPolicy, typename FwdIter, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< FwdIter >::value) > 
  unspecified rotate(ExPolicy && policy, FwdIter first, FwdIter new_first, 
                     FwdIter last);

Description

Performs a left rotation on a range of elements. Specifically, rotate swaps the elements in the range [first, last) in such a way that the element pointed to by new_first becomes the first element of the new range and the element pointed to by new_first - 1 becomes the last element.

[Note]Note

Complexity: Linear in the distance between first and last.

The assignments in the parallel rotate algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel rotate algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

[Note]Note

The type of dereferenced FwdIter must meet the requirements of MoveAssignable and MoveConstructible.

Parameters:

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

new_first

Refers to the element that should appear at the beginning of the rotated range.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

FwdIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of a forward iterator.

Returns:

The rotate algorithm returns a hpx::future<tagged_pair<tag::begin(FwdIter), tag::end(FwdIter)> > if the execution policy is of type parallel_task_execution_policy and returns tagged_pair<tag::begin(FwdIter), tag::end(FwdIter)> otherwise. The rotate algorithm returns the pair of iterators (first + (last - new_first), last).


Function template rotate_copy

hpx::parallel::v1::rotate_copy

Synopsis

// In header: <hpx/parallel/algorithms/rotate.hpp>


template<typename ExPolicy, typename FwdIter, typename OutIter, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value && traits::is_iterator< FwdIter >::value && traits::is_iterator< OutIter >::value) > 
  unspecified rotate_copy(ExPolicy && policy, FwdIter first, 
                          FwdIter new_first, FwdIter last, 
                          OutIter dest_first);

Description

Copies the elements from the range [first, last) to another range beginning at dest_first in such a way that the element pointed to by new_first becomes the first element of the new range and the element pointed to by new_first - 1 becomes the last element.

Note

Complexity: Performs exactly last - first assignments.

The assignments in the parallel rotate_copy algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel rotate_copy algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest_first

Refers to the beginning of the destination range.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

new_first

Refers to the element that should appear at the beginning of the rotated range.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

FwdIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of a bidirectional iterator.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Returns:

The rotate_copy algorithm returns a hpx::future<tagged_pair<tag::in(FwdIter), tag::out(OutIter)> > if the execution policy is of type parallel_task_execution_policy and returns tagged_pair<tag::in(FwdIter), tag::out(OutIter)> otherwise. The rotate_copy algorithm returns the output iterator to the element past the last element copied.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename Rng, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value && traits::is_range< Rng >::value) > 
        unspecified rotate(ExPolicy &&, Rng &&, 
                           typename traits::range_iterator< Rng >::type);
      template<typename ExPolicy, typename Rng, typename OutIter, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value && traits::is_range< Rng >::value && traits::is_iterator< OutIter >::value) > 
        unspecified rotate_copy(ExPolicy &&, Rng &&, 
                                typename traits::range_iterator< Rng >::type, 
                                OutIter);
    }
  }
}

Function template rotate

hpx::parallel::v1::rotate

Synopsis

// In header: <hpx/parallel/container_algorithms/rotate.hpp>


template<typename ExPolicy, typename Rng, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value && traits::is_range< Rng >::value) > 
  unspecified rotate(ExPolicy && policy, Rng && rng, 
                     typename traits::range_iterator< Rng >::type middle);

Description

Performs a left rotation on a range of elements. Specifically, rotate swaps the elements in the range rng in such a way that the element pointed to by middle becomes the first element of the rotated range and the element pointed to by middle - 1 becomes the last element.

Note

Complexity: Linear in the distance between first and last.

The assignments in the parallel rotate algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel rotate algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Note

The type of the dereferenced range iterator must meet the requirements of MoveAssignable and MoveConstructible.

Parameters:

middle

Refers to the element that should appear at the beginning of the rotated range.

policy

The execution policy to use for the scheduling of the iterations.

rng

Refers to the sequence of elements the algorithm will be applied to.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

Rng

The type of the source range used (deduced). The iterators extracted from this range type must meet the requirements of a forward iterator.

Returns:

The rotate algorithm returns a hpx::future<tagged_pair<tag::begin(FwdIter), tag::end(FwdIter)> > if the execution policy is of type parallel_task_execution_policy and returns tagged_pair<tag::begin(FwdIter), tag::end(FwdIter)> otherwise. The rotate algorithm returns the pair of iterators (first + (last - middle), last).


Function template rotate_copy

hpx::parallel::v1::rotate_copy

Synopsis

// In header: <hpx/parallel/container_algorithms/rotate.hpp>


template<typename ExPolicy, typename Rng, typename OutIter, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value && traits::is_range< Rng >::value && traits::is_iterator< OutIter >::value) > 
  unspecified rotate_copy(ExPolicy && policy, Rng && rng, 
                          typename traits::range_iterator< Rng >::type middle, 
                          OutIter dest_first);

Description

Copies the elements from the range rng to another range beginning at dest_first in such a way that the element pointed to by middle becomes the first element of the new range and the element pointed to by middle - 1 becomes the last element.

Note

Complexity: Performs exactly last - first assignments.

The assignments in the parallel rotate_copy algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel rotate_copy algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest_first

Refers to the beginning of the destination range.

middle

Refers to the element that should appear at the beginning of the rotated range.

policy

The execution policy to use for the scheduling of the iterations.

rng

Refers to the sequence of elements the algorithm will be applied to.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Rng

The type of the source range used (deduced). The iterators extracted from this range type must meet the requirements of a forward iterator.

Returns:

The rotate_copy algorithm returns a hpx::future<tagged_pair<tag::in(FwdIter), tag::out(OutIter)> > if the execution policy is of type parallel_task_execution_policy and returns tagged_pair<tag::in(FwdIter), tag::out(OutIter)> otherwise. The rotate_copy algorithm returns the output iterator to the element past the last element copied.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename FwdIter, typename FwdIter2> 
        unspecified search(ExPolicy &&, FwdIter, FwdIter, FwdIter2, FwdIter2);
      template<typename ExPolicy, typename FwdIter, typename FwdIter2, 
               typename Pred> 
        unspecified search(ExPolicy &&, FwdIter, FwdIter, FwdIter2, FwdIter2, 
                           Pred &&);
      template<typename ExPolicy, typename FwdIter, typename FwdIter2> 
        unspecified search_n(ExPolicy &&, FwdIter, std::size_t, FwdIter2, 
                             FwdIter2);
      template<typename ExPolicy, typename FwdIter, typename FwdIter2, 
               typename Pred> 
        unspecified search_n(ExPolicy &&, FwdIter, std::size_t, FwdIter2, 
                             FwdIter2, Pred &&);
    }
  }
}

Function template search

hpx::parallel::v1::search

Synopsis

// In header: <hpx/parallel/algorithms/search.hpp>


template<typename ExPolicy, typename FwdIter, typename FwdIter2> 
  unspecified search(ExPolicy && policy, FwdIter first, FwdIter last, 
                     FwdIter2 s_first, FwdIter2 s_last);

Description

Searches the range [first, last) for the first occurrence of the sequence of elements [s_first, s_last). Uses operator== to compare elements.

Note

Complexity: at most (S*N) comparisons where S = distance(s_first, s_last) and N = distance(first, last).

The comparison operations in the parallel search algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparison operations in the parallel search algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

first

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

last

Refers to the end of the sequence of elements of the first range the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

s_first

Refers to the beginning of the sequence of elements the algorithm will be searching for.

s_last

Refers to the end of the sequence of elements the algorithm will be searching for.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

FwdIter

The type of the source iterators used for the first range (deduced). This iterator type must meet the requirements of a forward iterator.

FwdIter2

The type of the source iterators used for the second range (deduced). This iterator type must meet the requirements of a forward iterator.

Returns:

The search algorithm returns a hpx::future<FwdIter> if the execution policy is of type task_execution_policy and returns FwdIter otherwise. The search algorithm returns an iterator to the beginning of the first occurrence of the subsequence [s_first, s_last) in the range [first, last). If the length of the subsequence [s_first, s_last) is greater than the length of the range [first, last), last is returned. If the subsequence is empty, first is returned; if no subsequence is found, last is returned.


Function template search

hpx::parallel::v1::search

Synopsis

// In header: <hpx/parallel/algorithms/search.hpp>


template<typename ExPolicy, typename FwdIter, typename FwdIter2, 
         typename Pred> 
  unspecified search(ExPolicy && policy, FwdIter first, FwdIter last, 
                     FwdIter2 s_first, FwdIter2 s_last, Pred && op);

Description

Searches the range [first, last) for the first occurrence of the sequence of elements [s_first, s_last). Uses a provided predicate to compare elements.

Note

Complexity: at most (S*N) comparisons where S = distance(s_first, s_last) and N = distance(first, last).

The comparison operations in the parallel search algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparison operations in the parallel search algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

first

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

last

Refers to the end of the sequence of elements of the first range the algorithm will be applied to.

op

Refers to the binary predicate which returns true if the elements should be treated as equal. The signature of the predicate function should be equivalent to

bool pred(const Type1 &a, const Type2 &b);


The signature does not need to have const &, but the function must not modify the objects passed to it. The types Type1 and Type2 must be such that objects of types FwdIter and FwdIter2 can be dereferenced and then implicitly converted to Type1 and Type2, respectively.

policy

The execution policy to use for the scheduling of the iterations.

s_first

Refers to the beginning of the sequence of elements the algorithm will be searching for.

s_last

Refers to the end of the sequence of elements the algorithm will be searching for.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

FwdIter

The type of the source iterators used for the first range (deduced). This iterator type must meet the requirements of a forward iterator.

FwdIter2

The type of the source iterators used for the second range (deduced). This iterator type must meet the requirements of a forward iterator.

Returns:

The search algorithm returns a hpx::future<FwdIter> if the execution policy is of type task_execution_policy and returns FwdIter otherwise. The search algorithm returns an iterator to the beginning of the first occurrence of the subsequence [s_first, s_last) in the range [first, last). If the length of the subsequence [s_first, s_last) is greater than the length of the range [first, last), last is returned. If the subsequence is empty, first is returned; if no subsequence is found, last is returned.


Function template search_n

hpx::parallel::v1::search_n

Synopsis

// In header: <hpx/parallel/algorithms/search.hpp>


template<typename ExPolicy, typename FwdIter, typename FwdIter2> 
  unspecified search_n(ExPolicy && policy, FwdIter first, std::size_t count, 
                       FwdIter2 s_first, FwdIter2 s_last);

Description

Searches the range [first, first+count) for the first occurrence of the sequence of elements [s_first, s_last). Uses operator== to compare elements.

Note

Complexity: at most (S*N) comparisons where S = distance(s_first, s_last) and N = count.

The comparison operations in the parallel search_n algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparison operations in the parallel search_n algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

count

Refers to the number of elements of the first range the algorithm will be applied to, i.e. the algorithm searches the range [first, first+count).

first

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

s_first

Refers to the beginning of the sequence of elements the algorithm will be searching for.

s_last

Refers to the end of the sequence of elements the algorithm will be searching for.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

FwdIter

The type of the source iterators used for the first range (deduced). This iterator type must meet the requirements of a forward iterator.

FwdIter2

The type of the source iterators used for the second range (deduced). This iterator type must meet the requirements of a forward iterator.

Returns:

The search_n algorithm returns a hpx::future<FwdIter> if the execution policy is of type task_execution_policy and returns FwdIter otherwise. The search_n algorithm returns an iterator to the beginning of the first occurrence of the subsequence [s_first, s_last) in the range [first, first+count). If the length of the subsequence [s_first, s_last) is greater than the length of the range [first, first+count), first is returned. If the subsequence is empty or no subsequence is found, first is also returned.


Function template search_n

hpx::parallel::v1::search_n

Synopsis

// In header: <hpx/parallel/algorithms/search.hpp>


template<typename ExPolicy, typename FwdIter, typename FwdIter2, 
         typename Pred> 
  unspecified search_n(ExPolicy && policy, FwdIter first, std::size_t count, 
                       FwdIter2 s_first, FwdIter2 s_last, Pred && op);

Description

Searches the range [first, first+count) for the first occurrence of the sequence of elements [s_first, s_last). Uses a provided predicate to compare elements.

Note

Complexity: at most (S*N) comparisons where S = distance(s_first, s_last) and N = count.

The comparison operations in the parallel search_n algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The comparison operations in the parallel search_n algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

count

Refers to the number of elements of the first range the algorithm will be applied to, i.e. the algorithm searches the range [first, first+count).

first

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

op

Refers to the binary predicate which returns true if the elements should be treated as equal. The signature of the predicate function should be equivalent to

bool pred(const Type1 &a, const Type2 &b);


The signature does not need to have const &, but the function must not modify the objects passed to it. The types Type1 and Type2 must be such that objects of types FwdIter and FwdIter2 can be dereferenced and then implicitly converted to Type1 and Type2, respectively.

policy

The execution policy to use for the scheduling of the iterations.

s_first

Refers to the beginning of the sequence of elements the algorithm will be searching for.

s_last

Refers to the end of the sequence of elements the algorithm will be searching for.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

FwdIter

The type of the source iterators used for the first range (deduced). This iterator type must meet the requirements of a forward iterator.

FwdIter2

The type of the source iterators used for the second range (deduced). This iterator type must meet the requirements of a forward iterator.

Returns:

The search_n algorithm returns a hpx::future<FwdIter> if the execution policy is of type task_execution_policy and returns FwdIter otherwise. The search_n algorithm returns an iterator to the beginning of the first occurrence of the subsequence [s_first, s_last) in the range [first, first+count). If the length of the subsequence [s_first, s_last) is greater than the length of the range [first, first+count), first is returned. If the subsequence is empty or no subsequence is found, first is also returned.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename InIter1, typename InIter2, 
               typename OutIter, typename F> 
        unspecified set_difference(ExPolicy &&, InIter1, InIter1, InIter2, 
                                   InIter2, OutIter, F &&);
      template<typename ExPolicy, typename InIter1, typename InIter2, 
               typename OutIter> 
        unspecified set_difference(ExPolicy &&, InIter1, InIter1, InIter2, 
                                   InIter2, OutIter);
    }
  }
}

Function template set_difference

hpx::parallel::v1::set_difference

Synopsis

// In header: <hpx/parallel/algorithms/set_difference.hpp>


template<typename ExPolicy, typename InIter1, typename InIter2, 
         typename OutIter, typename F> 
  unspecified set_difference(ExPolicy && policy, InIter1 first1, 
                             InIter1 last1, InIter2 first2, InIter2 last2, 
                             OutIter dest, F && f);

Description

Constructs a sorted range beginning at dest consisting of all elements present in the range [first1, last1) and not present in the range [first2, last2). This algorithm expects both input ranges to be sorted with the given binary predicate f.

Note

Complexity: At most 2*(N1 + N2 - 1) comparisons, where N1 is the length of the first sequence and N2 is the length of the second sequence.

Equivalent elements are treated individually, that is, if some element is found m times in [first1, last1) and n times in [first2, last2), it will be copied to dest exactly std::max(m-n, 0) times. The resulting range cannot overlap with either of the input ranges.

The application of function objects in the parallel algorithm invoked with a sequential execution policy object executes in sequential order in the calling thread (sequential_execution_policy) or in a single new thread spawned from the current thread (for sequential_task_execution_policy).

The application of function objects in the parallel algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy is permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest

Refers to the beginning of the destination range.

f

The binary predicate which defines the ordering of the elements; both input ranges must be sorted with respect to it. It returns true if its first argument is ordered before its second. The signature of the predicate function should be equivalent to the following:

bool pred(const Type1 &a, const Type1 &b);


The signature does not need to have const &, but the function must not modify the objects passed to it. The type Type1 must be such that objects of types InIter1 and InIter2 can be dereferenced and then implicitly converted to Type1.

first1

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

first2

Refers to the beginning of the sequence of elements of the second range the algorithm will be applied to.

last1

Refers to the end of the sequence of elements of the first range the algorithm will be applied to.

last2

Refers to the end of the sequence of elements of the second range the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it applies user-provided function objects.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of set_difference requires F to meet the requirements of CopyConstructible.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Returns:

The set_difference algorithm returns a hpx::future<OutIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns OutIter otherwise. The set_difference algorithm returns the output iterator to the element in the destination range, one past the last element copied.


Function template set_difference

hpx::parallel::v1::set_difference

Synopsis

// In header: <hpx/parallel/algorithms/set_difference.hpp>


template<typename ExPolicy, typename InIter1, typename InIter2, 
         typename OutIter> 
  unspecified set_difference(ExPolicy && policy, InIter1 first1, 
                             InIter1 last1, InIter2 first2, InIter2 last2, 
                             OutIter dest);

Description

Constructs a sorted range beginning at dest consisting of all elements present in the range [first1, last1) and not present in the range [first2, last2). This algorithm expects both input ranges to be sorted with operator<.

Note

Complexity: At most 2*(N1 + N2 - 1) comparisons, where N1 is the length of the first sequence and N2 is the length of the second sequence.

Equivalent elements are treated individually, that is, if some element is found m times in [first1, last1) and n times in [first2, last2), it will be copied to dest exactly std::max(m-n, 0) times. The resulting range cannot overlap with either of the input ranges.

The application of function objects in the parallel algorithm invoked with a sequential execution policy object executes in sequential order in the calling thread (sequential_execution_policy) or in a single new thread spawned from the current thread (for sequential_task_execution_policy).

The application of function objects in the parallel algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy is permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest

Refers to the beginning of the destination range.

first1

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

first2

Refers to the beginning of the sequence of elements of the second range the algorithm will be applied to.

last1

Refers to the end of the sequence of elements of the first range the algorithm will be applied to.

last2

Refers to the end of the sequence of elements of the second range the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it applies user-provided function objects.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Returns:

The set_difference algorithm returns a hpx::future<OutIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns OutIter otherwise. The set_difference algorithm returns the output iterator to the element in the destination range, one past the last element copied.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename InIter1, typename InIter2, 
               typename OutIter, typename F> 
        unspecified set_intersection(ExPolicy &&, InIter1, InIter1, InIter2, 
                                     InIter2, OutIter, F &&);
      template<typename ExPolicy, typename InIter1, typename InIter2, 
               typename OutIter> 
        unspecified set_intersection(ExPolicy &&, InIter1, InIter1, InIter2, 
                                     InIter2, OutIter);
    }
  }
}

Function template set_intersection

hpx::parallel::v1::set_intersection

Synopsis

// In header: <hpx/parallel/algorithms/set_intersection.hpp>


template<typename ExPolicy, typename InIter1, typename InIter2, 
         typename OutIter, typename F> 
  unspecified set_intersection(ExPolicy && policy, InIter1 first1, 
                               InIter1 last1, InIter2 first2, InIter2 last2, 
                               OutIter dest, F && f);

Description

Constructs a sorted range beginning at dest consisting of all elements present in both sorted ranges [first1, last1) and [first2, last2). This algorithm expects both input ranges to be sorted with the given binary predicate f.

Note

Complexity: At most 2*(N1 + N2 - 1) comparisons, where N1 is the length of the first sequence and N2 is the length of the second sequence.

If some element is found m times in [first1, last1) and n times in [first2, last2), the first std::min(m, n) elements will be copied from the first range to the destination range. The order of equivalent elements is preserved. The resulting range cannot overlap with either of the input ranges.

The application of function objects in the parallel algorithm invoked with a sequential execution policy object executes in sequential order in the calling thread (sequential_execution_policy) or in a single new thread spawned from the current thread (for sequential_task_execution_policy).

The application of function objects in the parallel algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy is permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest

Refers to the beginning of the destination range.

f

The binary predicate which defines the ordering of the elements; both input ranges must be sorted with respect to it. It returns true if its first argument is ordered before its second. The signature of the predicate function should be equivalent to the following:

bool pred(const Type1 &a, const Type1 &b);


The signature does not need to have const &, but the function must not modify the objects passed to it. The type Type1 must be such that objects of types InIter1 and InIter2 can be dereferenced and then implicitly converted to Type1.

first1

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

first2

Refers to the beginning of the sequence of elements of the second range the algorithm will be applied to.

last1

Refers to the end of the sequence of elements of the first range the algorithm will be applied to.

last2

Refers to the end of the sequence of elements of the second range the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it applies user-provided function objects.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of set_intersection requires F to meet the requirements of CopyConstructible.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Returns:

The set_intersection algorithm returns a hpx::future<OutIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns OutIter otherwise. The set_intersection algorithm returns the output iterator to the element in the destination range, one past the last element copied.


Function template set_intersection

hpx::parallel::v1::set_intersection

Synopsis

// In header: <hpx/parallel/algorithms/set_intersection.hpp>


template<typename ExPolicy, typename InIter1, typename InIter2, 
         typename OutIter> 
  unspecified set_intersection(ExPolicy && policy, InIter1 first1, 
                               InIter1 last1, InIter2 first2, InIter2 last2, 
                               OutIter dest);

Description

Constructs a sorted range beginning at dest consisting of all elements present in both sorted ranges [first1, last1) and [first2, last2). This algorithm expects both input ranges to be sorted with operator<.

[Note]Note

Complexity: At most 2*(N1 + N2 - 1) comparisons, where N1 is the length of the first sequence and N2 is the length of the second sequence.

If some element is found m times in [first1, last1) and n times in [first2, last2), the first std::min(m, n) elements will be copied from the first range to the destination range. The order of equivalent elements is preserved. The resulting range cannot overlap with either of the input ranges.

The application of function objects in a parallel algorithm invoked with a sequential execution policy object executes in sequential order in the calling thread (sequential_execution_policy) or in a single new thread spawned from the current thread (sequential_task_execution_policy).

The application of function objects in a parallel algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy is permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest

Refers to the beginning of the destination range.

first1

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

first2

Refers to the beginning of the sequence of elements of the second range the algorithm will be applied to.

last1

Refers to the end of the sequence of elements of the first range the algorithm will be applied to.

last2

Refers to the end of the sequence of elements of the second range the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it applies user-provided function objects.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Returns:

The set_intersection algorithm returns a hpx::future<OutIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns OutIter otherwise. The set_intersection algorithm returns the output iterator to the element in the destination range, one past the last element copied.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename InIter1, typename InIter2, 
               typename OutIter, typename F> 
        unspecified set_symmetric_difference(ExPolicy &&, InIter1, InIter1, 
                                             InIter2, InIter2, OutIter, F &&);
      template<typename ExPolicy, typename InIter1, typename InIter2, 
               typename OutIter> 
        unspecified set_symmetric_difference(ExPolicy &&, InIter1, InIter1, 
                                             InIter2, InIter2, OutIter);
    }
  }
}

Function template set_symmetric_difference

hpx::parallel::v1::set_symmetric_difference

Synopsis

// In header: <hpx/parallel/algorithms/set_symmetric_difference.hpp>


template<typename ExPolicy, typename InIter1, typename InIter2, 
         typename OutIter, typename F> 
  unspecified set_symmetric_difference(ExPolicy && policy, InIter1 first1, 
                                       InIter1 last1, InIter2 first2, 
                                       InIter2 last2, OutIter dest, F && f);

Description

Constructs a sorted range beginning at dest consisting of all elements present in either of the sorted ranges [first1, last1) and [first2, last2), but not in both of them. The resulting range is also sorted. This algorithm expects both input ranges to be sorted with the given binary predicate f.

[Note]Note

Complexity: At most 2*(N1 + N2 - 1) comparisons, where N1 is the length of the first sequence and N2 is the length of the second sequence.

If some element is found m times in [first1, last1) and n times in [first2, last2), it will be copied to dest exactly std::abs(m-n) times. If m>n, then the last m-n of those elements are copied from [first1,last1), otherwise the last n-m elements are copied from [first2,last2). The resulting range cannot overlap with either of the input ranges.

The application of function objects in a parallel algorithm invoked with a sequential execution policy object executes in sequential order in the calling thread (sequential_execution_policy) or in a single new thread spawned from the current thread (sequential_task_execution_policy).

The application of function objects in a parallel algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy is permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest

Refers to the beginning of the destination range.

f

The binary predicate which returns true if the elements should be treated as equal. The signature of the predicate function should be equivalent to the following:

bool pred(const Type1 &a, const Type1 &b);


The signature does not need to have const &, but the function must not modify the objects passed to it. The type Type1 must be such that objects of type InIter can be dereferenced and then implicitly converted to Type1.

first1

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

first2

Refers to the beginning of the sequence of elements of the second range the algorithm will be applied to.

last1

Refers to the end of the sequence of elements of the first range the algorithm will be applied to.

last2

Refers to the end of the sequence of elements of the second range the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it applies user-provided function objects.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of set_symmetric_difference requires F to meet the requirements of CopyConstructible.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Returns:

The set_symmetric_difference algorithm returns a hpx::future<OutIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns OutIter otherwise. The set_symmetric_difference algorithm returns the output iterator to the element in the destination range, one past the last element copied.


Function template set_symmetric_difference

hpx::parallel::v1::set_symmetric_difference

Synopsis

// In header: <hpx/parallel/algorithms/set_symmetric_difference.hpp>


template<typename ExPolicy, typename InIter1, typename InIter2, 
         typename OutIter> 
  unspecified set_symmetric_difference(ExPolicy && policy, InIter1 first1, 
                                       InIter1 last1, InIter2 first2, 
                                       InIter2 last2, OutIter dest);

Description

Constructs a sorted range beginning at dest consisting of all elements present in either of the sorted ranges [first1, last1) and [first2, last2), but not in both of them. The resulting range is also sorted. This algorithm expects both input ranges to be sorted with operator<.

[Note]Note

Complexity: At most 2*(N1 + N2 - 1) comparisons, where N1 is the length of the first sequence and N2 is the length of the second sequence.

If some element is found m times in [first1, last1) and n times in [first2, last2), it will be copied to dest exactly std::abs(m-n) times. If m>n, then the last m-n of those elements are copied from [first1,last1), otherwise the last n-m elements are copied from [first2,last2). The resulting range cannot overlap with either of the input ranges.

The application of function objects in a parallel algorithm invoked with a sequential execution policy object executes in sequential order in the calling thread (sequential_execution_policy) or in a single new thread spawned from the current thread (sequential_task_execution_policy).

The application of function objects in a parallel algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy is permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest

Refers to the beginning of the destination range.

first1

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

first2

Refers to the beginning of the sequence of elements of the second range the algorithm will be applied to.

last1

Refers to the end of the sequence of elements of the first range the algorithm will be applied to.

last2

Refers to the end of the sequence of elements of the second range the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it applies user-provided function objects.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Returns:

The set_symmetric_difference algorithm returns a hpx::future<OutIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns OutIter otherwise. The set_symmetric_difference algorithm returns the output iterator to the element in the destination range, one past the last element copied.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename InIter1, typename InIter2, 
               typename OutIter, typename F> 
        unspecified set_union(ExPolicy &&, InIter1, InIter1, InIter2, InIter2, 
                              OutIter, F &&);
      template<typename ExPolicy, typename InIter1, typename InIter2, 
               typename OutIter> 
        unspecified set_union(ExPolicy &&, InIter1, InIter1, InIter2, InIter2, 
                              OutIter);
    }
  }
}

Function template set_union

hpx::parallel::v1::set_union

Synopsis

// In header: <hpx/parallel/algorithms/set_union.hpp>


template<typename ExPolicy, typename InIter1, typename InIter2, 
         typename OutIter, typename F> 
  unspecified set_union(ExPolicy && policy, InIter1 first1, InIter1 last1, 
                        InIter2 first2, InIter2 last2, OutIter dest, F && f);

Description

Constructs a sorted range beginning at dest consisting of all elements present in one or both sorted ranges [first1, last1) and [first2, last2). This algorithm expects both input ranges to be sorted with the given binary predicate f.

[Note]Note

Complexity: At most 2*(N1 + N2 - 1) comparisons, where N1 is the length of the first sequence and N2 is the length of the second sequence.

If some element is found m times in [first1, last1) and n times in [first2, last2), then all m elements will be copied from [first1, last1) to dest, preserving order, and then exactly std::max(n-m, 0) elements will be copied from [first2, last2) to dest, also preserving order.

The resulting range cannot overlap with either of the input ranges.

The application of function objects in a parallel algorithm invoked with a sequential execution policy object executes in sequential order in the calling thread (sequential_execution_policy) or in a single new thread spawned from the current thread (sequential_task_execution_policy).

The application of function objects in a parallel algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy is permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest

Refers to the beginning of the destination range.

f

The binary predicate which returns true if the elements should be treated as equal. The signature of the predicate function should be equivalent to the following:

bool pred(const Type1 &a, const Type1 &b);


The signature does not need to have const &, but the function must not modify the objects passed to it. The type Type1 must be such that objects of type InIter can be dereferenced and then implicitly converted to Type1.

first1

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

first2

Refers to the beginning of the sequence of elements of the second range the algorithm will be applied to.

last1

Refers to the end of the sequence of elements of the first range the algorithm will be applied to.

last2

Refers to the end of the sequence of elements of the second range the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it applies user-provided function objects.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of set_union requires F to meet the requirements of CopyConstructible.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Returns:

The set_union algorithm returns a hpx::future<OutIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns OutIter otherwise. The set_union algorithm returns the output iterator to the element in the destination range, one past the last element copied.


Function template set_union

hpx::parallel::v1::set_union

Synopsis

// In header: <hpx/parallel/algorithms/set_union.hpp>


template<typename ExPolicy, typename InIter1, typename InIter2, 
         typename OutIter> 
  unspecified set_union(ExPolicy && policy, InIter1 first1, InIter1 last1, 
                        InIter2 first2, InIter2 last2, OutIter dest);

Description

Constructs a sorted range beginning at dest consisting of all elements present in one or both sorted ranges [first1, last1) and [first2, last2). This algorithm expects both input ranges to be sorted with operator<.

[Note]Note

Complexity: At most 2*(N1 + N2 - 1) comparisons, where N1 is the length of the first sequence and N2 is the length of the second sequence.

If some element is found m times in [first1, last1) and n times in [first2, last2), then all m elements will be copied from [first1, last1) to dest, preserving order, and then exactly std::max(n-m, 0) elements will be copied from [first2, last2) to dest, also preserving order.

The resulting range cannot overlap with either of the input ranges.

The application of function objects in a parallel algorithm invoked with a sequential execution policy object executes in sequential order in the calling thread (sequential_execution_policy) or in a single new thread spawned from the current thread (sequential_task_execution_policy).

The application of function objects in a parallel algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy is permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest

Refers to the beginning of the destination range.

first1

Refers to the beginning of the sequence of elements of the first range the algorithm will be applied to.

first2

Refers to the beginning of the sequence of elements of the second range the algorithm will be applied to.

last1

Refers to the end of the sequence of elements of the first range the algorithm will be applied to.

last2

Refers to the end of the sequence of elements of the second range the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it applies user-provided function objects.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Returns:

The set_union algorithm returns a hpx::future<OutIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns OutIter otherwise. The set_union algorithm returns the output iterator to the element in the destination range, one past the last element copied.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename Rng, 
               typename Proj = util::projection_identity, 
               typename Compare = std::less<typename std::remove_reference<typename traits::projected_range_result_of<Proj, Rng>::type>::type>, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_range< Rng >::value &&traits::is_projected_range< Proj, Rng >::value &&traits::is_indirect_callable< Compare,traits::projected_range< Proj, Rng >,traits::projected_range< Proj, Rng > >::value) > 
        unspecified sort(ExPolicy &&, Rng &&, Compare && = Compare(), 
                         Proj && = Proj());
    }
  }
}

Function template sort

hpx::parallel::v1::sort

Synopsis

// In header: <hpx/parallel/container_algorithms/sort.hpp>


template<typename ExPolicy, typename Rng, 
         typename Proj = util::projection_identity, 
         typename Compare = std::less<typename std::remove_reference<typename traits::projected_range_result_of<Proj, Rng>::type>::type>, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_range< Rng >::value &&traits::is_projected_range< Proj, Rng >::value &&traits::is_indirect_callable< Compare,traits::projected_range< Proj, Rng >,traits::projected_range< Proj, Rng > >::value) > 
  unspecified sort(ExPolicy && policy, Rng && rng, 
                   Compare && comp = Compare(), Proj && proj = Proj());

Description

Sorts the elements in the range rng in ascending order. The order of equal elements is not guaranteed to be preserved. The function uses the given comparison function object comp (defaults to using operator<()).

[Note]Note

Complexity: O(Nlog(N)), where N = std::distance(begin(rng), end(rng)) comparisons.

A sequence is sorted with respect to a comparator comp and a projection proj if for every iterator i pointing to the sequence and every non-negative integer n such that i + n is a valid iterator pointing to an element of the sequence, INVOKE(comp, INVOKE(proj, *(i + n)), INVOKE(proj, *i)) == false.

comp has to induce a strict weak ordering on the values.

The application of function objects in a parallel algorithm invoked with an execution policy object of type sequential_execution_policy executes in sequential order in the calling thread.

The application of function objects in a parallel algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy is permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

comp

comp is a callable object. The return value of the INVOKE operation applied to an object of type Comp, when contextually converted to bool, yields true if the first argument of the call is less than the second, and false otherwise. It is assumed that comp will not apply any non-constant function through the dereferenced iterator.

policy

The execution policy to use for the scheduling of the iterations.

proj

Specifies the function (or function object) which will be invoked for each pair of elements as a projection operation before the actual predicate comp is invoked.

rng

Refers to the sequence of elements the algorithm will be applied to.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it applies user-provided function objects.

Proj

The type of an optional projection function. This defaults to util::projection_identity

Rng

The type of the source range used (deduced). The iterators extracted from this range type must meet the requirements of an input iterator.

Returns:

The sort algorithm returns a hpx::future<Iter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns Iter otherwise. It returns an iterator equal to the end of rng.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename KeyIter, typename ValueIter, 
               typename Compare = std::less<typename std::iterator_traits<KeyIter>::value_type> > 
        unspecified sort_by_key(ExPolicy &&, KeyIter, KeyIter, ValueIter, 
                                Compare && = Compare());
    }
  }
}

Function template sort_by_key

hpx::parallel::v1::sort_by_key

Synopsis

// In header: <hpx/parallel/algorithms/sort_by_key.hpp>


template<typename ExPolicy, typename KeyIter, typename ValueIter, 
         typename Compare = std::less<typename std::iterator_traits<KeyIter>::value_type> > 
  unspecified sort_by_key(ExPolicy && policy, KeyIter key_first, 
                          KeyIter key_last, ValueIter value_first, 
                          Compare && comp = Compare());

Description

Sorts one range of data using keys supplied in another range. The key elements in the range [key_first, key_last) are sorted in ascending order, with the corresponding elements in the value range moved to follow the sorted order. The algorithm is not stable: the order of equal elements is not guaranteed to be preserved. The function uses the given comparison function object comp (defaults to using operator<()).

[Note]Note

Complexity: O(Nlog(N)), where N = std::distance(first, last) comparisons.

A sequence is sorted with respect to a comparator comp and a projection proj if for every iterator i pointing to the sequence and every non-negative integer n such that i + n is a valid iterator pointing to an element of the sequence, INVOKE(comp, INVOKE(proj, *(i + n)), INVOKE(proj, *i)) == false.

comp has to induce a strict weak ordering on the values.

The application of function objects in a parallel algorithm invoked with an execution policy object of type sequential_execution_policy executes in sequential order in the calling thread.

The application of function objects in a parallel algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy is permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

comp

comp is a callable object. The return value of the INVOKE operation applied to an object of type Comp, when contextually converted to bool, yields true if the first argument of the call is less than the second, and false otherwise. It is assumed that comp will not apply any non-constant function through the dereferenced iterator.

key_first

Refers to the beginning of the sequence of key elements the algorithm will be applied to.

key_last

Refers to the end of the sequence of key elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

value_first

Refers to the beginning of the sequence of value elements the algorithm will be applied to; this range must contain the same number of elements as [key_first, key_last).

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it applies user-provided function objects.

KeyIter

The type of the key iterators used (deduced). This iterator type must meet the requirements of a random access iterator.

ValueIter

The type of the value iterators used (deduced). This iterator type must meet the requirements of a random access iterator.

Returns:

The sort_by_key algorithm returns a hpx::future<tagged_pair<tag::in1(KeyIter), tag::in2(ValueIter)> > if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns tagged_pair<tag::in1(KeyIter), tag::in2(ValueIter)> otherwise. The algorithm returns a pair holding an iterator pointing to the first element after the last element in the input key sequence and an iterator pointing to the first element after the last element in the input value sequence.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename ForwardIter1, 
               typename ForwardIter2> 
        unspecified swap_ranges(ExPolicy &&, ForwardIter1, ForwardIter1, 
                                ForwardIter2);
    }
  }
}

Function template swap_ranges

hpx::parallel::v1::swap_ranges

Synopsis

// In header: <hpx/parallel/algorithms/swap_ranges.hpp>


template<typename ExPolicy, typename ForwardIter1, typename ForwardIter2> 
  unspecified swap_ranges(ExPolicy && policy, ForwardIter1 first1, 
                          ForwardIter1 last1, ForwardIter2 first2);

Description

Exchanges elements between the range [first1, last1) and another range starting at first2.

[Note]Note

Complexity: Linear in the distance between first1 and last1.

The swap operations in the parallel swap_ranges algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The swap operations in the parallel swap_ranges algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

first1

Refers to the beginning of the first sequence of elements the algorithm will be applied to.

first2

Refers to the beginning of the second sequence of elements the algorithm will be applied to.

last1

Refers to the end of the first sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the swap operations.

ForwardIter1

The type of the first range of iterators to swap (deduced). This iterator type must meet the requirements of a forward iterator.

ForwardIter2

The type of the second range of iterators to swap (deduced). This iterator type must meet the requirements of a forward iterator.

Returns:

The swap_ranges algorithm returns a hpx::future<ForwardIter2> if the execution policy is of type parallel_task_execution_policy and returns ForwardIter2 otherwise. The swap_ranges algorithm returns an iterator to the element past the last element exchanged in the range beginning with first2.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename InIter, typename OutIter, 
               typename F, typename Proj = util::projection_identity, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< InIter >::value &&traits::is_iterator< OutIter >::value &&traits::is_projected< Proj, InIter >::value &&traits::is_indirect_callable< F, traits::projected< Proj, InIter > >::value) > 
        unspecified transform(ExPolicy &&, InIter, InIter, OutIter, F &&, 
                              Proj && = Proj());
      template<typename ExPolicy, typename InIter1, typename InIter2, 
               typename OutIter, typename F, 
               typename Proj1 = util::projection_identity, 
               typename Proj2 = util::projection_identity, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< InIter1 >::value &&traits::is_iterator< InIter2 >::value &&traits::is_iterator< OutIter >::value &&traits::is_projected< Proj1, InIter1 >::value &&traits::is_projected< Proj2, InIter2 >::value &&traits::is_indirect_callable< F, traits::projected< Proj1, InIter1 >,traits::projected< Proj2, InIter2 > >::value) > 
        unspecified transform(ExPolicy &&, InIter1, InIter1, InIter2, OutIter, 
                              F &&);
      template<typename ExPolicy, typename InIter1, typename InIter2, 
               typename OutIter, typename F, 
               typename Proj1 = util::projection_identity, 
               typename Proj2 = util::projection_identity, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< InIter1 >::value &&traits::is_iterator< InIter2 >::value &&traits::is_iterator< OutIter >::value &&traits::is_projected< Proj1, InIter1 >::value &&traits::is_projected< Proj2, InIter2 >::value &&traits::is_indirect_callable< F, traits::projected< Proj1, InIter1 >,traits::projected< Proj2, InIter2 > >::value) > 
        unspecified transform(ExPolicy &&, InIter1, InIter1, InIter2, InIter2, 
                              OutIter, F &&);
    }
  }
}

Function template transform

hpx::parallel::v1::transform

Synopsis

// In header: <hpx/parallel/algorithms/transform.hpp>


template<typename ExPolicy, typename InIter, typename OutIter, typename F, 
         typename Proj = util::projection_identity, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value &&traits::is_iterator< InIter >::value &&traits::is_iterator< OutIter >::value &&traits::is_projected< Proj, InIter >::value &&traits::is_indirect_callable< F, traits::projected< Proj, InIter > >::value) > 
  unspecified transform(ExPolicy && policy, InIter first, InIter last, 
                        OutIter dest, F && f, Proj && proj = Proj());

Description

Applies the given function f to the range [first, last) and stores the result in another range, beginning at dest.

Note

Complexity: Exactly last - first applications of f

The invocations of f in the parallel transform algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The invocations of f in the parallel transform algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest

Refers to the beginning of the destination range.

f

Specifies the function (or function object) which will be invoked for each of the elements in the sequence specified by [first, last). This is a unary operation. The signature of this function should be equivalent to:

Ret fun(const Type &a);


The signature does not need to have const&. The type Type must be such that an object of type InIter can be dereferenced and then implicitly converted to Type. The type Ret must be such that an object of type OutIter can be dereferenced and assigned a value of type Ret.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

proj

Specifies the function (or function object) which will be invoked for each of the elements as a projection operation before the actual predicate f is invoked.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the invocations of f.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of transform requires F to meet the requirements of CopyConstructible.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Proj

The type of an optional projection function. This defaults to util::projection_identity

Returns:

The transform algorithm returns a hpx::future<tagged_pair<tag::in(InIter), tag::out(OutIter)> > if the execution policy is of type parallel_task_execution_policy and returns tagged_pair<tag::in(InIter), tag::out(OutIter)> otherwise. The transform algorithm returns a pair holding an iterator referring to the first element after the input sequence and the output iterator to the element in the destination range, one past the last element copied.


Function template transform

hpx::parallel::v1::transform

Synopsis

// In header: <hpx/parallel/algorithms/transform.hpp>


template<typename ExPolicy, typename InIter1, typename InIter2, 
         typename OutIter, typename F, 
         typename Proj1 = util::projection_identity, 
         typename Proj2 = util::projection_identity, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value && traits::is_iterator< InIter1 >::value && traits::is_iterator< InIter2 >::value && traits::is_iterator< OutIter >::value && traits::is_projected< Proj1, InIter1 >::value && traits::is_projected< Proj2, InIter2 >::value && traits::is_indirect_callable< F, traits::projected< Proj1, InIter1 >, traits::projected< Proj2, InIter2 > >::value) >
  unspecified transform(ExPolicy && policy, InIter1 first1, InIter1 last1, 
                        InIter2 first2, OutIter dest, F && f);

Description

Applies the given function f to pairs of elements from two ranges: one defined by [first1, last1) and the other beginning at first2, and stores the result in another range, beginning at dest.

Note

Complexity: Exactly last1 - first1 applications of f

The invocations of f in the parallel transform algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The invocations of f in the parallel transform algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest

Refers to the beginning of the destination range.

f

Specifies the function (or function object) which will be invoked for each pair of elements from the sequence specified by [first1, last1) and the sequence beginning at first2. This is a binary operation. The signature of this function should be equivalent to:

Ret fun(const Type1 &a, const Type2 &b);


The signature does not need to have const&. The types Type1 and Type2 must be such that objects of types InIter1 and InIter2 can be dereferenced and then implicitly converted to Type1 and Type2 respectively. The type Ret must be such that an object of type OutIter can be dereferenced and assigned a value of type Ret.

first1

Refers to the beginning of the first sequence of elements the algorithm will be applied to.

first2

Refers to the beginning of the second sequence of elements the algorithm will be applied to.

last1

Refers to the end of the first sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the invocations of f.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of transform requires F to meet the requirements of CopyConstructible.

InIter1

The type of the source iterators for the first range used (deduced). This iterator type must meet the requirements of an input iterator.

InIter2

The type of the source iterators for the second range used (deduced). This iterator type must meet the requirements of an input iterator.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Proj1

The type of an optional projection function to be used for elements of the first sequence. This defaults to util::projection_identity

Proj2

The type of an optional projection function to be used for elements of the second sequence. This defaults to util::projection_identity

Returns:

The transform algorithm returns a hpx::future<tagged_tuple<tag::in1(InIter1), tag::in2(InIter2), tag::out(OutIter)> > if the execution policy is of type parallel_task_execution_policy and returns tagged_tuple<tag::in1(InIter1), tag::in2(InIter2), tag::out(OutIter)> otherwise. The transform algorithm returns a tuple holding an iterator referring to the first element after the first input sequence, an iterator referring to the first element after the second input sequence, and the output iterator referring to the element in the destination range, one past the last element copied.


Function template transform

hpx::parallel::v1::transform

Synopsis

// In header: <hpx/parallel/algorithms/transform.hpp>


template<typename ExPolicy, typename InIter1, typename InIter2, 
         typename OutIter, typename F, 
         typename Proj1 = util::projection_identity, 
         typename Proj2 = util::projection_identity, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value && traits::is_iterator< InIter1 >::value && traits::is_iterator< InIter2 >::value && traits::is_iterator< OutIter >::value && traits::is_projected< Proj1, InIter1 >::value && traits::is_projected< Proj2, InIter2 >::value && traits::is_indirect_callable< F, traits::projected< Proj1, InIter1 >, traits::projected< Proj2, InIter2 > >::value) >
  unspecified transform(ExPolicy && policy, InIter1 first1, InIter1 last1, 
                        InIter2 first2, InIter2 last2, OutIter dest, F && f);

Description

Applies the given function f to pairs of elements from two ranges: one defined by [first1, last1) and the other beginning at first2, and stores the result in another range, beginning at dest.

Note

Complexity: Exactly min(last2-first2, last1-first1) applications of f

The invocations of f in the parallel transform algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The invocations of f in the parallel transform algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Note

The algorithm will invoke the binary operation until it reaches the end of the shorter of the two given input sequences.

Parameters:

dest

Refers to the beginning of the destination range.

f

Specifies the function (or function object) which will be invoked for each pair of elements from the two input sequences. This is a binary operation. The signature of this function should be equivalent to:

Ret fun(const Type1 &a, const Type2 &b);


The signature does not need to have const&. The types Type1 and Type2 must be such that objects of types InIter1 and InIter2 can be dereferenced and then implicitly converted to Type1 and Type2 respectively. The type Ret must be such that an object of type OutIter can be dereferenced and assigned a value of type Ret.

first1

Refers to the beginning of the first sequence of elements the algorithm will be applied to.

first2

Refers to the beginning of the second sequence of elements the algorithm will be applied to.

last1

Refers to the end of the first sequence of elements the algorithm will be applied to.

last2

Refers to the end of the second sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the invocations of f.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of transform requires F to meet the requirements of CopyConstructible.

InIter1

The type of the source iterators for the first range used (deduced). This iterator type must meet the requirements of an input iterator.

InIter2

The type of the source iterators for the second range used (deduced). This iterator type must meet the requirements of an input iterator.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Proj1

The type of an optional projection function to be used for elements of the first sequence. This defaults to util::projection_identity

Proj2

The type of an optional projection function to be used for elements of the second sequence. This defaults to util::projection_identity

Returns:

The transform algorithm returns a hpx::future<tagged_tuple<tag::in1(InIter1), tag::in2(InIter2), tag::out(OutIter)> > if the execution policy is of type parallel_task_execution_policy and returns tagged_tuple<tag::in1(InIter1), tag::in2(InIter2), tag::out(OutIter)> otherwise. The transform algorithm returns a tuple holding an iterator referring to the first element after the first input sequence, an iterator referring to the first element after the second input sequence, and the output iterator referring to the element in the destination range, one past the last element copied.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename Rng, typename OutIter, typename F, 
               typename Proj = util::projection_identity, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value && traits::is_range< Rng >::value && traits::is_iterator< OutIter >::value && traits::is_projected_range< Proj, Rng >::value && traits::is_indirect_callable< F, traits::projected_range< Proj, Rng > >::value) >
        unspecified transform(ExPolicy &&, Rng &&, OutIter, F &&, 
                              Proj && = Proj());
      template<typename ExPolicy, typename Rng, typename InIter2, 
               typename OutIter, typename F, 
               typename Proj1 = util::projection_identity, 
               typename Proj2 = util::projection_identity, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value && traits::is_range< Rng >::value && traits::is_iterator< InIter2 >::value && traits::is_iterator< OutIter >::value && traits::is_projected_range< Proj1, Rng >::value && traits::is_projected< Proj2, InIter2 >::value && traits::is_indirect_callable< F, traits::projected_range< Proj1, Rng >, traits::projected< Proj2, InIter2 > >::value) >
        unspecified transform(ExPolicy &&, Rng &&, InIter2, OutIter, F &&);
      template<typename ExPolicy, typename Rng1, typename Rng2, 
               typename OutIter, typename F, 
               typename Proj1 = util::projection_identity, 
               typename Proj2 = util::projection_identity, 
               HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value && traits::is_range< Rng1 >::value && traits::is_range< Rng2 >::value && traits::is_iterator< OutIter >::value && traits::is_projected_range< Proj1, Rng1 >::value && traits::is_projected_range< Proj2, Rng2 >::value && traits::is_indirect_callable< F, traits::projected_range< Proj1, Rng1 >, traits::projected_range< Proj2, Rng2 > >::value) >
        unspecified transform(ExPolicy &&, Rng1 &&, Rng2 &&, OutIter, F &&);
    }
  }
}

Function template transform

hpx::parallel::v1::transform

Synopsis

// In header: <hpx/parallel/container_algorithms/transform.hpp>


template<typename ExPolicy, typename Rng, typename OutIter, typename F, 
         typename Proj = util::projection_identity, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value && traits::is_range< Rng >::value && traits::is_iterator< OutIter >::value && traits::is_projected_range< Proj, Rng >::value && traits::is_indirect_callable< F, traits::projected_range< Proj, Rng > >::value) >
  unspecified transform(ExPolicy && policy, Rng && rng, OutIter dest, F && f, 
                        Proj && proj = Proj());

Description

Applies the given function f to the given range rng and stores the result in another range, beginning at dest.

Note

Complexity: Exactly size(rng) applications of f

The invocations of f in the parallel transform algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The invocations of f in the parallel transform algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest

Refers to the beginning of the destination range.

f

Specifies the function (or function object) which will be invoked for each of the elements of the range rng. This is a unary operation. The signature of this function should be equivalent to:

Ret fun(const Type &a);


The signature does not need to have const&. The type Type must be such that an iterator of the range rng can be dereferenced and then implicitly converted to Type. The type Ret must be such that an object of type OutIter can be dereferenced and assigned a value of type Ret.

policy

The execution policy to use for the scheduling of the iterations.

proj

Specifies the function (or function object) which will be invoked for each of the elements as a projection operation before the actual predicate f is invoked.

rng

Refers to the sequence of elements the algorithm will be applied to.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the invocations of f.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of transform requires F to meet the requirements of CopyConstructible.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Proj

The type of an optional projection function. This defaults to util::projection_identity

Rng

The type of the source range used (deduced). The iterators extracted from this range type must meet the requirements of an input iterator.

Returns:

The transform algorithm returns a hpx::future<tagged_pair<tag::in(InIter), tag::out(OutIter)> > if the execution policy is of type parallel_task_execution_policy and returns tagged_pair<tag::in(InIter), tag::out(OutIter)> otherwise. The transform algorithm returns a tuple holding an iterator referring to the first element after the input sequence and the output iterator to the element in the destination range, one past the last element copied.


Function template transform

hpx::parallel::v1::transform

Synopsis

// In header: <hpx/parallel/container_algorithms/transform.hpp>


template<typename ExPolicy, typename Rng, typename InIter2, typename OutIter, 
         typename F, typename Proj1 = util::projection_identity, 
         typename Proj2 = util::projection_identity, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value && traits::is_range< Rng >::value && traits::is_iterator< InIter2 >::value && traits::is_iterator< OutIter >::value && traits::is_projected_range< Proj1, Rng >::value && traits::is_projected< Proj2, InIter2 >::value && traits::is_indirect_callable< F, traits::projected_range< Proj1, Rng >, traits::projected< Proj2, InIter2 > >::value) >
  unspecified transform(ExPolicy && policy, Rng && rng, InIter2 first2, 
                        OutIter dest, F && f);

Description

Applies the given function f to pairs of elements from two ranges: one defined by rng and the other beginning at first2, and stores the result in another range, beginning at dest.

Note

Complexity: Exactly size(rng) applications of f

The invocations of f in the parallel transform algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The invocations of f in the parallel transform algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest

Refers to the beginning of the destination range.

f

Specifies the function (or function object) which will be invoked for each pair of elements from rng and the sequence beginning at first2. This is a binary operation. The signature of this function should be equivalent to:

Ret fun(const Type1 &a, const Type2 &b);


The signature does not need to have const&. The types Type1 and Type2 must be such that an iterator of the range rng and an object of type InIter2 can be dereferenced and then implicitly converted to Type1 and Type2 respectively. The type Ret must be such that an object of type OutIter can be dereferenced and assigned a value of type Ret.

first2

Refers to the beginning of the second sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

rng

Refers to the sequence of elements the algorithm will be applied to.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the invocations of f.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of transform requires F to meet the requirements of CopyConstructible.

InIter2

The type of the source iterators for the second range used (deduced). This iterator type must meet the requirements of an input iterator.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Proj1

The type of an optional projection function to be used for elements of the first sequence. This defaults to util::projection_identity

Proj2

The type of an optional projection function to be used for elements of the second sequence. This defaults to util::projection_identity

Rng

The type of the source range used (deduced). The iterators extracted from this range type must meet the requirements of an input iterator.

Returns:

The transform algorithm returns a hpx::future<tagged_tuple<tag::in1(InIter1), tag::in2(InIter2), tag::out(OutIter)> > if the execution policy is of type parallel_task_execution_policy and returns tagged_tuple<tag::in1(InIter1), tag::in2(InIter2), tag::out(OutIter)> otherwise. The transform algorithm returns a tuple holding an iterator referring to the first element after the first input sequence, an iterator referring to the first element after the second input sequence, and the output iterator referring to the element in the destination range, one past the last element copied.


Function template transform

hpx::parallel::v1::transform

Synopsis

// In header: <hpx/parallel/container_algorithms/transform.hpp>


template<typename ExPolicy, typename Rng1, typename Rng2, typename OutIter, 
         typename F, typename Proj1 = util::projection_identity, 
         typename Proj2 = util::projection_identity, 
         HPX_CONCEPT_REQUIRES_(is_execution_policy< ExPolicy >::value && traits::is_range< Rng1 >::value && traits::is_range< Rng2 >::value && traits::is_iterator< OutIter >::value && traits::is_projected_range< Proj1, Rng1 >::value && traits::is_projected_range< Proj2, Rng2 >::value && traits::is_indirect_callable< F, traits::projected_range< Proj1, Rng1 >, traits::projected_range< Proj2, Rng2 > >::value) >
  unspecified transform(ExPolicy && policy, Rng1 && rng1, Rng2 && rng2, 
                        OutIter dest, F && f);

Description

Applies the given function f to pairs of elements from the two given ranges rng1 and rng2, and stores the result in another range, beginning at dest.

Note

Complexity: Exactly min(size(rng1), size(rng2)) applications of f

The invocations of f in the parallel transform algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The invocations of f in the parallel transform algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Note

The algorithm will invoke the binary operation until it reaches the end of the shorter of the two given input sequences.

Parameters:

dest

Refers to the beginning of the destination range.

f

Specifies the function (or function object) which will be invoked for each pair of elements from the ranges rng1 and rng2. This is a binary operation. The signature of this function should be equivalent to:

Ret fun(const Type1 &a, const Type2 &b);


The signature does not need to have const&. The types Type1 and Type2 must be such that iterators of the ranges rng1 and rng2 can be dereferenced and then implicitly converted to Type1 and Type2 respectively. The type Ret must be such that an object of type OutIter can be dereferenced and assigned a value of type Ret.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the invocations of f.

F

The type of the function/function object to use (deduced). Unlike its sequential form, the parallel overload of transform requires F to meet the requirements of CopyConstructible.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Proj1

The type of an optional projection function to be used for elements of the first sequence. This defaults to util::projection_identity

Proj2

The type of an optional projection function to be used for elements of the second sequence. This defaults to util::projection_identity

Returns:

The transform algorithm returns a hpx::future<tagged_tuple<tag::in1(InIter1), tag::in2(InIter2), tag::out(OutIter)> > if the execution policy is of type parallel_task_execution_policy and returns tagged_tuple<tag::in1(InIter1), tag::in2(InIter2), tag::out(OutIter)> otherwise. The transform algorithm returns a tuple holding an iterator referring to the first element after the first input sequence, an iterator referring to the first element after the second input sequence, and the output iterator referring to the element in the destination range, one past the last element copied.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename InIter, typename OutIter, 
               typename Conv, typename T, typename Op> 
        unspecified transform_exclusive_scan(ExPolicy &&, InIter, InIter, 
                                             OutIter, Conv &&, T, Op &&);
    }
  }
}

Function template transform_exclusive_scan

hpx::parallel::v1::transform_exclusive_scan

Synopsis

// In header: <hpx/parallel/algorithms/transform_exclusive_scan.hpp>


template<typename ExPolicy, typename InIter, typename OutIter, typename Conv, 
         typename T, typename Op> 
  unspecified transform_exclusive_scan(ExPolicy && policy, InIter first, 
                                       InIter last, OutIter dest, 
                                       Conv && conv, T init, Op && op);

Description

Assigns through each iterator i in [result, result + (last - first)) the value of GENERALIZED_NONCOMMUTATIVE_SUM(op, init, conv(*first), ..., conv(*(first + (i - result) - 1))).

Note

Complexity: O(last - first) applications of the predicates op and conv.

The reduce operations in the parallel transform_exclusive_scan algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The reduce operations in the parallel transform_exclusive_scan algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Note

GENERALIZED_NONCOMMUTATIVE_SUM(op, a1, ..., aN) is defined as:

  • a1 when N is 1

  • op(GENERALIZED_NONCOMMUTATIVE_SUM(op, a1, ..., aK), GENERALIZED_NONCOMMUTATIVE_SUM(op, aM, ..., aN)) where 1 < K+1 = M <= N.

Neither conv nor op shall invalidate iterators or subranges, or modify elements in the ranges [first,last) or [result,result + (last - first)).

The behavior of transform_exclusive_scan may be non-deterministic for a non-associative predicate.

Parameters:

conv

Specifies the function (or function object) which will be invoked for each of the elements in the sequence specified by [first, last). This is a unary predicate. The signature of this predicate should be equivalent to:

R fun(const Type &a);


The signature does not need to have const&, but the function must not modify the objects passed to it. The type Type must be such that an object of type InIter can be dereferenced and then implicitly converted to Type. The type R must be such that an object of this type can be implicitly converted to T.

dest

Refers to the beginning of the destination range.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

init

The initial value for the generalized sum.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

op

Specifies the function (or function object) which will be invoked for each of the values of the input sequence. This is a binary predicate. The signature of this predicate should be equivalent to:

Ret fun(const Type1 &a, const Type1 &b);


The signature does not need to have const&, but the function must not modify the objects passed to it. The types Type1 and Ret must be such that an object of a type as given by the input sequence can be implicitly converted to any of those types.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

Conv

The type of the unary function object used for the conversion operation.

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

Op

The type of the binary function object used for the reduction operation.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

T

The type of the value to be used as initial (and intermediate) values (deduced).

Returns:

The transform_exclusive_scan algorithm returns a hpx::future<OutIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns OutIter otherwise. The transform_exclusive_scan algorithm returns the output iterator to the element in the destination range, one past the last element copied.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename InIter, typename OutIter, 
               typename Conv, typename T, typename Op> 
        unspecified transform_inclusive_scan(ExPolicy &&, InIter, InIter, 
                                             OutIter, Conv &&, T, Op &&);
      template<typename ExPolicy, typename InIter, typename OutIter, 
               typename Conv, typename Op> 
        unspecified transform_inclusive_scan(ExPolicy &&, InIter, InIter, 
                                             OutIter, Conv &&, Op &&);
    }
  }
}

Function template transform_inclusive_scan

hpx::parallel::v1::transform_inclusive_scan

Synopsis

// In header: <hpx/parallel/algorithms/transform_inclusive_scan.hpp>


template<typename ExPolicy, typename InIter, typename OutIter, typename Conv, 
         typename T, typename Op> 
  unspecified transform_inclusive_scan(ExPolicy && policy, InIter first, 
                                       InIter last, OutIter dest, 
                                       Conv && conv, T init, Op && op);

Description

Assigns through each iterator i in [result, result + (last - first)) the value of GENERALIZED_NONCOMMUTATIVE_SUM(op, init, conv(*first), ..., conv(*(first + (i - result)))).

Note

Complexity: O(last - first) applications of the predicate op.

The reduce operations in the parallel transform_inclusive_scan algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The reduce operations in the parallel transform_inclusive_scan algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

[Note]Note

GENERALIZED_NONCOMMUTATIVE_SUM(op, a1, ..., aN) is defined as:

  • a1 when N is 1

  • op(GENERALIZED_NONCOMMUTATIVE_SUM(op, a1, ..., aK), GENERALIZED_NONCOMMUTATIVE_SUM(op, aM, ..., aN)) where 1 < K+1 = M <= N.

Neither conv nor op shall invalidate iterators or subranges, or modify elements in the ranges [first,last) or [result,result + (last - first)).

The difference between transform_exclusive_scan and transform_inclusive_scan is that transform_inclusive_scan includes the ith input element in the ith sum. If op is not mathematically associative, the behavior of transform_inclusive_scan may be non-deterministic.

Parameters:

conv

Specifies the function (or function object) which will be invoked for each of the elements in the sequence specified by [first, last). This is a unary predicate. The signature of this predicate should be equivalent to:

R fun(const Type &a);


The signature does not need to have const&, but the function must not modify the objects passed to it. The type Type must be such that an object of type InIter can be dereferenced and then implicitly converted to Type. The type R must be such that an object of this type can be implicitly converted to T.

dest

Refers to the beginning of the destination range.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

init

The initial value for the generalized sum.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

op

Specifies the function (or function object) which will be invoked for each of the values of the input sequence. This is a binary predicate. The signature of this predicate should be equivalent to:

Ret fun(const Type1 &a, const Type1 &b);


The signature does not need to have const&, but the function must not modify the objects passed to it. The types Type1 and Ret must be such that an object of a type as given by the input sequence can be implicitly converted to any of those types.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

Conv

The type of the unary function object used for the conversion operation.

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

Op

The type of the binary function object used for the reduction operation.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

T

The type of the value to be used as initial (and intermediate) values (deduced).

Returns:

The transform_inclusive_scan algorithm returns a hpx::future<OutIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns OutIter otherwise. The transform_inclusive_scan algorithm returns the output iterator to the element in the destination range, one past the last element copied.


Function template transform_inclusive_scan

hpx::parallel::v1::transform_inclusive_scan

Synopsis

// In header: <hpx/parallel/algorithms/transform_inclusive_scan.hpp>


template<typename ExPolicy, typename InIter, typename OutIter, typename Conv, 
         typename Op> 
  unspecified transform_inclusive_scan(ExPolicy && policy, InIter first, 
                                       InIter last, OutIter dest, 
                                       Conv && conv, Op && op);

Description

Assigns through each iterator i in [result, result + (last - first)) the value of GENERALIZED_NONCOMMUTATIVE_SUM(op, conv(*first), ..., conv(*(first + (i - result)))).

[Note]Note

Complexity: O(last - first) applications each of the predicates conv and op.

The reduce operations in the parallel transform_inclusive_scan algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The reduce operations in the parallel transform_inclusive_scan algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

[Note]Note

GENERALIZED_NONCOMMUTATIVE_SUM(op, a1, ..., aN) is defined as:

  • a1 when N is 1

  • op(GENERALIZED_NONCOMMUTATIVE_SUM(op, a1, ..., aK), GENERALIZED_NONCOMMUTATIVE_SUM(op, aM, ..., aN)) where 1 < K+1 = M <= N.

Neither conv nor op shall invalidate iterators or subranges, or modify elements in the ranges [first,last) or [result,result + (last - first)).

The difference between transform_exclusive_scan and transform_inclusive_scan is that transform_inclusive_scan includes the ith input element in the ith sum.

Parameters:

conv

Specifies the function (or function object) which will be invoked for each of the elements in the sequence specified by [first, last). This is a unary predicate. The signature of this predicate should be equivalent to:

R fun(const Type &a);


The signature does not need to have const&, but the function must not modify the objects passed to it. The type Type must be such that an object of type InIter can be dereferenced and then implicitly converted to Type. The type R must be such that an object of this type can be implicitly converted to the argument type of op.

dest

Refers to the beginning of the destination range.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

op

Specifies the function (or function object) which will be invoked for each of the values of the input sequence. This is a binary predicate. The signature of this predicate should be equivalent to:

Ret fun(const Type1 &a, const Type1 &b);


The signature does not need to have const&, but the function must not modify the objects passed to it. The types Type1 and Ret must be such that an object of a type as given by the input sequence can be implicitly converted to any of those types.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

Conv

The type of the unary function object used for the conversion operation.

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

Op

The type of the binary function object used for the reduction operation.

OutIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of an output iterator.

Returns:

The transform_inclusive_scan algorithm returns a hpx::future<OutIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns OutIter otherwise. The transform_inclusive_scan algorithm returns the output iterator to the element in the destination range, one past the last element copied.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename InIter, typename T, 
               typename Reduce, typename Convert> 
        unspecified transform_reduce(ExPolicy &&, InIter, InIter, Convert &&, 
                                     T, Reduce &&);
    }
  }
}

Function template transform_reduce

hpx::parallel::v1::transform_reduce

Synopsis

// In header: <hpx/parallel/algorithms/transform_reduce.hpp>


template<typename ExPolicy, typename InIter, typename T, typename Reduce, 
         typename Convert> 
  unspecified transform_reduce(ExPolicy && policy, InIter first, InIter last, 
                               Convert && conv_op, T init, Reduce && red_op);

Description

Returns GENERALIZED_SUM(red_op, init, conv_op(*first), ..., conv_op(*(first + (last - first) - 1))).

[Note]Note

Complexity: O(last - first) applications of the predicates red_op and conv_op.

The reduce operations in the parallel transform_reduce algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The reduce operations in the parallel transform_reduce algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

[Note]Note

GENERALIZED_SUM(op, a1, ..., aN) is defined as follows:

  • a1 when N is 1

  • op(GENERALIZED_SUM(op, b1, ..., bK), GENERALIZED_SUM(op, bM, ..., bN)), where:

    • b1, ..., bN may be any permutation of a1, ..., aN and

    • 1 < K+1 = M <= N.

The difference between transform_reduce and accumulate is that the behavior of transform_reduce may be non-deterministic for non-associative or non-commutative binary predicate.

Parameters:

conv_op

Specifies the function (or function object) which will be invoked for each of the elements in the sequence specified by [first, last). This is a unary predicate. The signature of this predicate should be equivalent to:

R fun(const Type &a);


The signature does not need to have const&, but the function must not modify the objects passed to it. The type Type must be such that an object of type InIter can be dereferenced and then implicitly converted to Type. The type R must be such that an object of this type can be implicitly converted to T.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

init

The initial value for the generalized sum.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

red_op

Specifies the function (or function object) which will be invoked for each of the values returned from the invocation of conv_op. This is a binary predicate. The signature of this predicate should be equivalent to:

Ret fun(const Type1 &a, const Type2 &b);


The signature does not need to have const&, but the function must not modify the objects passed to it. The types Type1, Type2, and Ret must be such that an object of a type as returned from conv_op can be implicitly converted to any of those types.

Template Parameters:

Convert

The type of the unary function object used to transform the elements of the input sequence before invoking the reduce function.

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

Reduce

The type of the binary function object used for the reduction operation.

T

The type of the value to be used as initial (and intermediate) values (deduced).

Returns:

The transform_reduce algorithm returns a hpx::future<T> if the execution policy is of type parallel_task_execution_policy and returns T otherwise. The transform_reduce algorithm returns the result of the generalized sum over the values returned from conv_op when applied to the elements given by the input range [first, last).

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename InIter, typename FwdIter> 
        unspecified uninitialized_copy(ExPolicy &&, InIter, InIter, FwdIter);
      template<typename ExPolicy, typename InIter, typename Size, 
               typename FwdIter> 
        unspecified uninitialized_copy_n(ExPolicy &&, InIter, Size, FwdIter);
    }
  }
}

Function template uninitialized_copy

hpx::parallel::v1::uninitialized_copy

Synopsis

// In header: <hpx/parallel/algorithms/uninitialized_copy.hpp>


template<typename ExPolicy, typename InIter, typename FwdIter> 
  unspecified uninitialized_copy(ExPolicy && policy, InIter first, 
                                 InIter last, FwdIter dest);

Description

Copies the elements in the range, defined by [first, last), to an uninitialized memory area beginning at dest. If an exception is thrown during the copy operation, the function has no effects.

[Note]Note

Complexity: Performs exactly last - first assignments.

The assignments in the parallel uninitialized_copy algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel uninitialized_copy algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

dest

Refers to the beginning of the destination range.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

FwdIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of a forward iterator.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

Returns:

The uninitialized_copy algorithm returns a hpx::future<FwdIter>, if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns FwdIter otherwise. The uninitialized_copy algorithm returns the output iterator to the element in the destination range, one past the last element copied.


Function template uninitialized_copy_n

hpx::parallel::v1::uninitialized_copy_n

Synopsis

// In header: <hpx/parallel/algorithms/uninitialized_copy.hpp>


template<typename ExPolicy, typename InIter, typename Size, typename FwdIter> 
  unspecified uninitialized_copy_n(ExPolicy && policy, InIter first, 
                                   Size count, FwdIter dest);

Description

Copies the elements in the range [first, first + count), starting from first and proceeding to first + count - 1, to another range beginning at dest. If an exception is thrown during the copy operation, the function has no effects.

[Note]Note

Complexity: Performs exactly count assignments, if count > 0, no assignments otherwise.

The assignments in the parallel uninitialized_copy_n algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The assignments in the parallel uninitialized_copy_n algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

count

Refers to the number of elements starting at first the algorithm will be applied to.

dest

Refers to the beginning of the destination range.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

FwdIter

The type of the iterator representing the destination range (deduced). This iterator type must meet the requirements of a forward iterator.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

Size

The type of the argument specifying the number of elements the algorithm will be applied to.

Returns:

The uninitialized_copy_n algorithm returns a hpx::future<FwdIter> if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns FwdIter otherwise. The uninitialized_copy_n algorithm returns the output iterator to the element in the destination range, one past the last element copied.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExPolicy, typename InIter, typename T> 
        unspecified uninitialized_fill(ExPolicy &&, InIter, InIter, T const &);
      template<typename ExPolicy, typename FwdIter, typename Size, typename T> 
        unspecified uninitialized_fill_n(ExPolicy &&, FwdIter, Size, 
                                         T const &);
    }
  }
}

Function template uninitialized_fill

hpx::parallel::v1::uninitialized_fill

Synopsis

// In header: <hpx/parallel/algorithms/uninitialized_fill.hpp>


template<typename ExPolicy, typename InIter, typename T> 
  unspecified uninitialized_fill(ExPolicy && policy, InIter first, 
                                 InIter last, T const & value);

Description

Copies the given value to an uninitialized memory area, defined by the range [first, last). If an exception is thrown during the initialization, the function has no effects.

[Note]Note

Complexity: Linear in the distance between first and last.

The initializations in the parallel uninitialized_fill algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The initializations in the parallel uninitialized_fill algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

last

Refers to the end of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

value

The value to be assigned.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

InIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of an input iterator.

T

The type of the value to be assigned (deduced).

Returns:

The uninitialized_fill algorithm returns a hpx::future<void>, if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns nothing otherwise.


Function template uninitialized_fill_n

hpx::parallel::v1::uninitialized_fill_n

Synopsis

// In header: <hpx/parallel/algorithms/uninitialized_fill.hpp>


template<typename ExPolicy, typename FwdIter, typename Size, typename T> 
  unspecified uninitialized_fill_n(ExPolicy && policy, FwdIter first, 
                                   Size count, T const & value);

Description

Copies the given value value to the first count elements in an uninitialized memory area beginning at first. If an exception is thrown during the initialization, the function has no effects.

[Note]Note

Complexity: Performs exactly count assignments, if count > 0, no assignments otherwise.

The initializations in the parallel uninitialized_fill_n algorithm invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The initializations in the parallel uninitialized_fill_n algorithm invoked with an execution policy object of type parallel_execution_policy or parallel_task_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and indeterminately sequenced within each thread.

Parameters:

count

Refers to the number of elements starting at first the algorithm will be applied to.

first

Refers to the beginning of the sequence of elements the algorithm will be applied to.

policy

The execution policy to use for the scheduling of the iterations.

value

The value to be assigned.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the algorithm may be parallelized and the manner in which it executes the assignments.

FwdIter

The type of the source iterators used (deduced). This iterator type must meet the requirements of a forward iterator.

Size

The type of the argument specifying the number of elements the algorithm will be applied to.

T

The type of the value to be assigned (deduced).

Returns:

The uninitialized_fill_n algorithm returns a hpx::future<void>, if the execution policy is of type sequential_task_execution_policy or parallel_task_execution_policy and returns nothing otherwise.

namespace hpx {
  namespace parallel {
    namespace v1 {
      template<typename ExecutionPolicy, typename Executor, 
               typename Parameters> 
        struct rebind_executor;
      struct sequential_task_execution_policy;
      template<typename Executor, typename Parameters> 
        struct sequential_task_execution_policy_shim;
      struct sequential_execution_policy;
      template<typename Executor, typename Parameters> 
        struct sequential_execution_policy_shim;
      struct parallel_task_execution_policy;
      template<typename Executor, typename Parameters> 
        struct parallel_task_execution_policy_shim;
      struct parallel_execution_policy;
      template<typename Executor, typename Parameters> 
        struct parallel_execution_policy_shim;
      struct parallel_vector_execution_policy;
      template<typename T> struct is_rebound_execution_policy;
      template<typename T> struct is_execution_policy;
      template<typename T> struct is_parallel_execution_policy;
      template<typename T> struct is_sequential_execution_policy;
      template<typename T> struct is_async_execution_policy;

      class execution_policy;

      static task_execution_policy_tag const task;
      static sequential_execution_policy const seq;      // Default sequential execution policy object. 
      static parallel_execution_policy const par;      // Default parallel execution policy object. 
      static parallel_vector_execution_policy const par_vec;      // Default vector execution policy object. 
    }
  }
}

Struct template rebind_executor

hpx::parallel::v1::rebind_executor

Synopsis

// In header: <hpx/parallel/execution_policy.hpp>

template<typename ExecutionPolicy, typename Executor, typename Parameters> 
struct rebind_executor {
  // types
  typedef ExecutionPolicy::template rebind< executor_type, parameters_type >::type type;  // The type of the rebound execution policy. 
};

Description

Rebind the type of executor used by an execution policy. The execution category of Executor shall not be weaker than that of ExecutionPolicy.


Struct sequential_task_execution_policy

hpx::parallel::v1::sequential_task_execution_policy

Synopsis

// In header: <hpx/parallel/execution_policy.hpp>


struct sequential_task_execution_policy {
  // types
  typedef parallel::sequential_executor      executor_type;             // The type of the executor associated with this execution policy. 
  typedef unspecified                        executor_parameters_type;
  typedef parallel::sequential_execution_tag execution_category;      

  // member classes/structs/unions
  template<typename Executor_, typename Parameters_> 
  struct rebind {
    // types
    typedef sequential_task_execution_policy_shim< Executor_, Parameters_ > type;  // The type of the rebound execution policy. 
  };

  // public member functions
  sequential_task_execution_policy operator()(task_execution_policy_tag) const;
  template<typename Executor> 
    rebind_executor< sequential_task_execution_policy, Executor, executor_parameters_type >::type 
    on(Executor &&) const;
  template<typename Parameters> 
    rebind_executor< sequential_task_execution_policy, executor_type, Parameters >::type 
    with(Parameters &&) const;

  // public static functions
  static executor_type & executor();
  static executor_parameters_type & parameters();
};

Description

Extension: The class sequential_task_execution_policy is an execution policy type used as a unique type to disambiguate parallel algorithm overloading and indicate that a parallel algorithm's execution may not be parallelized (has to run sequentially).

The algorithm returns a future representing the result of the corresponding algorithm when invoked with the sequential_execution_policy.

sequential_task_execution_policy public types

  1. typedef unspecified executor_parameters_type;

    The type of the associated executor parameters object which is associated with this execution policy

  2. typedef parallel::sequential_execution_tag execution_category;

    The category of the execution agents created by this execution policy.

sequential_task_execution_policy public member functions

  1. sequential_task_execution_policy 
    operator()(task_execution_policy_tag tag) const;

    Create a new sequential_task_execution_policy from itself

    Parameters:

    tag

    [in] Specify that the corresponding asynchronous execution policy should be used

    Returns:

    The new sequential_task_execution_policy

  2. template<typename Executor> 
      rebind_executor< sequential_task_execution_policy, Executor, executor_parameters_type >::type 
      on(Executor && exec) const;

    Create a new sequential_task_execution_policy from the given executor

    [Note]Note

    Requires: is_executor<Executor>::value is true

    Parameters:

    exec

    [in] The executor to use for the execution of the parallel algorithm the returned execution policy is used with.

    Template Parameters:

    Executor

    The type of the executor to associate with this execution policy.

    Returns:

    The new sequential_task_execution_policy

  3. template<typename Parameters> 
      rebind_executor< sequential_task_execution_policy, executor_type, Parameters >::type 
      with(Parameters && params) const;

    Create a new sequential_task_execution_policy from the given execution parameters

    [Note]Note

    Requires: is_executor_parameters<Parameters>::value is true

    Parameters:

    params

    [in] The executor parameters to use for the execution of the parallel algorithm the returned execution policy is used with.

    Template Parameters:

    Parameters

    The type of the executor parameters to associate with this execution policy.

    Returns:

    The new sequential_task_execution_policy

sequential_task_execution_policy public static functions

  1. static executor_type & executor();
    Return the associated executor object.
  2. static executor_parameters_type & parameters();
    Return the associated executor parameters object.

Struct template rebind

hpx::parallel::v1::sequential_task_execution_policy::rebind

Synopsis

// In header: <hpx/parallel/execution_policy.hpp>


template<typename Executor_, typename Parameters_> 
struct rebind {
  // types
  typedef sequential_task_execution_policy_shim< Executor_, Parameters_ > type;  // The type of the rebound execution policy. 
};

Description

Rebind the type of executor used by this execution policy. The execution category of Executor shall not be weaker than that of this execution policy


Struct template sequential_task_execution_policy_shim

hpx::parallel::v1::sequential_task_execution_policy_shim

Synopsis

// In header: <hpx/parallel/execution_policy.hpp>

template<typename Executor, typename Parameters> 
struct sequential_task_execution_policy_shim :
  public hpx::parallel::v1::sequential_task_execution_policy
{
  // types
  typedef Executor                                             executor_type;             // The type of the executor associated with this execution policy. 
  typedef Parameters                                           executor_parameters_type;
  typedef executor_traits< executor_type >::execution_category execution_category;      

  // member classes/structs/unions
  template<typename Executor_, typename Parameters_> 
  struct rebind {
    // types
    typedef sequential_task_execution_policy_shim< Executor_, Parameters_ > type;  // The type of the rebound execution policy. 
  };

  // public member functions
  sequential_task_execution_policy_shim const & 
  operator()(task_execution_policy_tag) const;
  template<typename Executor_> 
    rebind_executor< sequential_task_execution_policy_shim, Executor_, executor_parameters_type >::type 
    on(Executor_ &&) const;
  template<typename Parameters_> 
    rebind_executor< sequential_task_execution_policy_shim, Executor, Parameters_ >::type 
    with(Parameters_ &&) const;
  Executor & executor();
  Executor const & executor() const;
  Parameters & parameters();
  Parameters const & parameters() const;
};

Description

Extension: The class sequential_task_execution_policy_shim is an execution policy type used as a unique type to disambiguate parallel algorithm overloading based on combining an underlying sequential_task_execution_policy and an executor, and to indicate that a parallel algorithm's execution may not be parallelized (has to run sequentially).

The algorithm returns a future representing the result of the corresponding algorithm when invoked with the sequential_execution_policy.

sequential_task_execution_policy_shim public types

  1. typedef Parameters executor_parameters_type;

    The type of the associated executor parameters object which is associated with this execution policy

  2. typedef executor_traits< executor_type >::execution_category execution_category;

    The category of the execution agents created by this execution policy.

sequential_task_execution_policy_shim public member functions

  1. sequential_task_execution_policy_shim const & 
    operator()(task_execution_policy_tag tag) const;

    Create a new sequential_task_execution_policy from itself

    Parameters:

    tag

    [in] Specify that the corresponding asynchronous execution policy should be used

    Returns:

    The new sequential_task_execution_policy

  2. template<typename Executor_> 
      rebind_executor< sequential_task_execution_policy_shim, Executor_, executor_parameters_type >::type 
      on(Executor_ && exec) const;

    Create a new sequential_task_execution_policy from the given executor

    [Note]Note

    Requires: is_executor<Executor>::value is true

    Parameters:

    exec

    [in] The executor to use for the execution of the parallel algorithm the returned execution policy is used with.

    Returns:

    The new sequential_task_execution_policy

  3. template<typename Parameters_> 
      rebind_executor< sequential_task_execution_policy_shim, Executor, Parameters_ >::type 
      with(Parameters_ && params) const;

    Create a new sequential_task_execution_policy from the given execution parameters

    [Note]Note

    Requires: is_executor_parameters<Parameters>::value is true

    Parameters:

    params

    [in] The executor parameters to use for the execution of the parallel algorithm the returned execution policy is used with.

    Returns:

    The new sequential_task_execution_policy

  4. Executor & executor();
    Return the associated executor object.
  5. Executor const & executor() const;
    Return the associated executor object.
  6. Parameters & parameters();
    Return the associated executor parameters object.
  7. Parameters const & parameters() const;
    Return the associated executor parameters object.

Struct template rebind

hpx::parallel::v1::sequential_task_execution_policy_shim::rebind

Synopsis

// In header: <hpx/parallel/execution_policy.hpp>


template<typename Executor_, typename Parameters_> 
struct rebind {
  // types
  typedef sequential_task_execution_policy_shim< Executor_, Parameters_ > type;  // The type of the rebound execution policy. 
};

Description

Rebind the type of executor used by this execution policy. The execution category of Executor shall not be weaker than that of this execution policy.


Struct sequential_execution_policy

hpx::parallel::v1::sequential_execution_policy

Synopsis

// In header: <hpx/parallel/execution_policy.hpp>


struct sequential_execution_policy {
  // types
  typedef parallel::sequential_executor      executor_type;             // The type of the executor associated with this execution policy. 
  typedef unspecified                        executor_parameters_type;
  typedef parallel::sequential_execution_tag execution_category;      

  // member classes/structs/unions
  template<typename Executor_, typename Parameters_> 
  struct rebind {
    // types
    typedef sequential_execution_policy_shim< Executor_, Parameters_ > type;  // The type of the rebound execution policy. 
  };

  // public member functions
  sequential_task_execution_policy operator()(task_execution_policy_tag) const;
  template<typename Executor> 
    rebind_executor< sequential_execution_policy, Executor, executor_parameters_type >::type 
    on(Executor &&) const;
  template<typename Parameters> 
    rebind_executor< sequential_execution_policy, executor_type, Parameters >::type 
    with(Parameters &&) const;

  // public static functions
  static executor_type & executor();
  static executor_parameters_type & parameters();
};

Description

The class sequential_execution_policy is an execution policy type used as a unique type to disambiguate parallel algorithm overloading and require that a parallel algorithm's execution may not be parallelized.

sequential_execution_policy public types

  1. typedef unspecified executor_parameters_type;

    The type of the executor parameters object associated with this execution policy.

  2. typedef parallel::sequential_execution_tag execution_category;

    The category of the execution agents created by this execution policy.

sequential_execution_policy public member functions

  1. sequential_task_execution_policy 
    operator()(task_execution_policy_tag tag) const;

    Create a new sequential_task_execution_policy.

    Parameters:

    tag

    [in] Specify that the corresponding asynchronous execution policy should be used

    Returns:

    The new sequential_task_execution_policy

  2. template<typename Executor> 
      rebind_executor< sequential_execution_policy, Executor, executor_parameters_type >::type 
      on(Executor && exec) const;

    Create a new sequential_execution_policy from the given executor

    [Note]Note

    Requires: is_executor<Executor>::value is true

    Parameters:

    exec

    [in] The executor to use for the execution of the parallel algorithm the returned execution policy is used with.

    Template Parameters:

    Executor

    The type of the executor to associate with this execution policy.

    Returns:

    The new sequential_execution_policy

  3. template<typename Parameters> 
      rebind_executor< sequential_execution_policy, executor_type, Parameters >::type 
      with(Parameters && params) const;

    Create a new sequential_execution_policy from the given execution parameters

    [Note]Note

    Requires: is_executor_parameters<Parameters>::value is true

    Parameters:

    params

    [in] The executor parameters to use for the execution of the parallel algorithm the returned execution policy is used with.

    Template Parameters:

    Parameters

    The type of the executor parameters to associate with this execution policy.

    Returns:

    The new sequential_execution_policy

sequential_execution_policy public static functions

  1. static executor_type & executor();
    Return the associated executor object.
  2. static executor_parameters_type & parameters();
    Return the associated executor parameters object.
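
These members compose as a fluent interface. The following sketch (assuming a working HPX build and the predefined policy objects seq and task declared alongside this header) shows both the blocking and the asynchronous form:

```cpp
#include <hpx/include/parallel_for_each.hpp>
#include <vector>

void increment_all(std::vector<int>& v)
{
    using namespace hpx::parallel;

    // seq: the algorithm runs sequentially and blocks until done.
    for_each(seq, v.begin(), v.end(), [](int& i) { ++i; });

    // seq(task): operator()(task) yields a sequential_task_execution_policy,
    // so the algorithm returns a future instead of blocking.
    auto f = for_each(seq(task), v.begin(), v.end(), [](int& i) { ++i; });
    f.wait();
}
```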

Struct template rebind

hpx::parallel::v1::sequential_execution_policy::rebind

Synopsis

// In header: <hpx/parallel/execution_policy.hpp>


template<typename Executor_, typename Parameters_> 
struct rebind {
  // types
  typedef sequential_execution_policy_shim< Executor_, Parameters_ > type;  // The type of the rebound execution policy. 
};

Description

Rebind the type of executor used by this execution policy. The execution category of Executor shall not be weaker than that of this execution policy.


Struct template sequential_execution_policy_shim

hpx::parallel::v1::sequential_execution_policy_shim

Synopsis

// In header: <hpx/parallel/execution_policy.hpp>

template<typename Executor, typename Parameters> 
struct sequential_execution_policy_shim :
  public hpx::parallel::v1::sequential_execution_policy
{
  // types
  typedef Executor                                             executor_type;             // The type of the executor associated with this execution policy. 
  typedef Parameters                                           executor_parameters_type;
  typedef executor_traits< executor_type >::execution_category execution_category;      

  // member classes/structs/unions
  template<typename Executor_, typename Parameters_> 
  struct rebind {
    // types
    typedef sequential_execution_policy_shim< Executor_, Parameters_ > type;  // The type of the rebound execution policy. 
  };

  // public member functions
  sequential_task_execution_policy_shim< Executor, Parameters > 
  operator()(task_execution_policy_tag) const;
  template<typename Executor_> 
    rebind_executor< sequential_execution_policy_shim, Executor_, executor_parameters_type >::type 
    on(Executor_ &&) const;
  template<typename Parameters_> 
    rebind_executor< sequential_execution_policy_shim, executor_type, Parameters_ >::type 
    with(Parameters_ &&) const;
  Executor & executor();
  Executor const & executor() const;
  Parameters & parameters();
  Parameters const & parameters() const;
};

Description

The class sequential_execution_policy_shim is an execution policy type used as a unique type to disambiguate parallel algorithm overloading and require that a parallel algorithm's execution may not be parallelized.

sequential_execution_policy_shim public types

  1. typedef Parameters executor_parameters_type;

    The type of the executor parameters object associated with this execution policy.

  2. typedef executor_traits< executor_type >::execution_category execution_category;

    The category of the execution agents created by this execution policy.

sequential_execution_policy_shim public member functions

  1. sequential_task_execution_policy_shim< Executor, Parameters > 
    operator()(task_execution_policy_tag tag) const;

    Create a new sequential_task_execution_policy_shim.

    Parameters:

    tag

    [in] Specify that the corresponding asynchronous execution policy should be used

    Returns:

    The new sequential_task_execution_policy_shim

  2. template<typename Executor_> 
      rebind_executor< sequential_execution_policy_shim, Executor_, executor_parameters_type >::type 
      on(Executor_ && exec) const;

    Create a new sequential_execution_policy_shim from the given executor.

    [Note]Note

    Requires: is_executor<Executor>::value is true

    Parameters:

    exec

    [in] The executor to use for the execution of the parallel algorithm the returned execution policy is used with.

    Returns:

    The new sequential_execution_policy_shim

  3. template<typename Parameters_> 
      rebind_executor< sequential_execution_policy_shim, executor_type, Parameters_ >::type 
      with(Parameters_ && params) const;

    Create a new sequential_execution_policy_shim from the given execution parameters.

    [Note]Note

    Requires: is_executor_parameters<Parameters>::value is true

    Parameters:

    params

    [in] The executor parameters to use for the execution of the parallel algorithm the returned execution policy is used with.

    Returns:

    The new sequential_execution_policy_shim

  4. Executor & executor();
    Return the associated executor object.
  5. Executor const & executor() const;
    Return the associated executor object.
  6. Parameters & parameters();
    Return the associated executor parameters object.
  7. Parameters const & parameters() const;
    Return the associated executor parameters object.

Struct template rebind

hpx::parallel::v1::sequential_execution_policy_shim::rebind

Synopsis

// In header: <hpx/parallel/execution_policy.hpp>


template<typename Executor_, typename Parameters_> 
struct rebind {
  // types
  typedef sequential_execution_policy_shim< Executor_, Parameters_ > type;  // The type of the rebound execution policy. 
};

Description

Rebind the type of executor used by this execution policy. The execution category of Executor shall not be weaker than that of this execution policy.


Struct parallel_task_execution_policy

hpx::parallel::v1::parallel_task_execution_policy

Synopsis

// In header: <hpx/parallel/execution_policy.hpp>


struct parallel_task_execution_policy {
  // types
  typedef parallel::parallel_executor      executor_type;             // The type of the executor associated with this execution policy. 
  typedef unspecified                      executor_parameters_type;
  typedef parallel::parallel_execution_tag execution_category;      

  // member classes/structs/unions
  template<typename Executor_, typename Parameters_> 
  struct rebind {
    // types
    typedef parallel_task_execution_policy_shim< Executor_, Parameters_ > type;  // The type of the rebound execution policy. 
  };

  // public member functions
  parallel_task_execution_policy operator()(task_execution_policy_tag) const;
  template<typename Executor> 
    rebind_executor< parallel_task_execution_policy, Executor, executor_parameters_type >::type 
    on(Executor &&) const;
  template<typename Parameters> 
    rebind_executor< parallel_task_execution_policy, executor_type, Parameters >::type 
    with(Parameters &&) const;

  // public static functions
  static executor_type & executor();
  static executor_parameters_type & parameters();
};

Description

Extension: The class parallel_task_execution_policy is an execution policy type used as a unique type to disambiguate parallel algorithm overloading and indicate that a parallel algorithm's execution may be parallelized.

The algorithm returns a future representing the result of the corresponding algorithm when invoked with the parallel_execution_policy.

parallel_task_execution_policy public types

  1. typedef unspecified executor_parameters_type;

    The type of the executor parameters object associated with this execution policy.

  2. typedef parallel::parallel_execution_tag execution_category;

    The category of the execution agents created by this execution policy.

parallel_task_execution_policy public member functions

  1. parallel_task_execution_policy operator()(task_execution_policy_tag tag) const;

    Create a new parallel_task_execution_policy from itself

    Parameters:

    tag

    [in] Specify that the corresponding asynchronous execution policy should be used

    Returns:

    The new parallel_task_execution_policy

  2. template<typename Executor> 
      rebind_executor< parallel_task_execution_policy, Executor, executor_parameters_type >::type 
      on(Executor && exec) const;

    Create a new parallel_task_execution_policy from the given executor.

    [Note]Note

    Requires: is_executor<Executor>::value is true

    Parameters:

    exec

    [in] The executor to use for the execution of the parallel algorithm the returned execution policy is used with.

    Template Parameters:

    Executor

    The type of the executor to associate with this execution policy.

    Returns:

    The new parallel_task_execution_policy

  3. template<typename Parameters> 
      rebind_executor< parallel_task_execution_policy, executor_type, Parameters >::type 
      with(Parameters && params) const;

    Create a new parallel_task_execution_policy from the given execution parameters

    [Note]Note

    Requires: is_executor_parameters<Parameters>::value is true

    Parameters:

    params

    [in] The executor parameters to use for the execution of the parallel algorithm the returned execution policy is used with.

    Template Parameters:

    Parameters

    The type of the executor parameters to associate with this execution policy.

    Returns:

    The new parallel_task_execution_policy

parallel_task_execution_policy public static functions

  1. static executor_type & executor();
    Return the associated executor object.
  2. static executor_parameters_type & parameters();
    Return the associated executor parameters object.
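
As a usage sketch (assuming an HPX build and the predefined par and task policy objects), the asynchronous policy makes an algorithm return its result wrapped in a future:

```cpp
#include <hpx/include/parallel_reduce.hpp>
#include <vector>

hpx::future<int> async_sum(std::vector<int> const& v)
{
    using namespace hpx::parallel;

    // par(task) selects the parallel_task_execution_policy: the reduction
    // may be parallelized, and the call returns immediately with a future.
    return reduce(par(task), v.begin(), v.end(), 0);
}
```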

Struct template rebind

hpx::parallel::v1::parallel_task_execution_policy::rebind

Synopsis

// In header: <hpx/parallel/execution_policy.hpp>


template<typename Executor_, typename Parameters_> 
struct rebind {
  // types
  typedef parallel_task_execution_policy_shim< Executor_, Parameters_ > type;  // The type of the rebound execution policy. 
};

Description

Rebind the type of executor used by this execution policy. The execution category of Executor shall not be weaker than that of this execution policy.


Struct template parallel_task_execution_policy_shim

hpx::parallel::v1::parallel_task_execution_policy_shim

Synopsis

// In header: <hpx/parallel/execution_policy.hpp>

template<typename Executor, typename Parameters> 
struct parallel_task_execution_policy_shim :
  public hpx::parallel::v1::parallel_task_execution_policy
{
  // types
  typedef Executor                                             executor_type;             // The type of the executor associated with this execution policy. 
  typedef Parameters                                           executor_parameters_type;
  typedef executor_traits< executor_type >::execution_category execution_category;      

  // member classes/structs/unions
  template<typename Executor_, typename Parameters_> 
  struct rebind {
    // types
    typedef parallel_task_execution_policy_shim< Executor_, Parameters_ > type;  // The type of the rebound execution policy. 
  };

  // public member functions
  parallel_task_execution_policy_shim 
  operator()(task_execution_policy_tag) const;
  template<typename Executor_> 
    rebind_executor< parallel_task_execution_policy_shim, Executor_, executor_parameters_type >::type 
    on(Executor_ &&) const;
  template<typename Parameters_> 
    rebind_executor< parallel_task_execution_policy_shim, Executor, Parameters_ >::type 
    with(Parameters_ &&) const;
  Executor & executor();
  Executor const & executor() const;
  Parameters & parameters();
  Parameters const & parameters() const;
};

Description

Extension: The class parallel_task_execution_policy_shim is an execution policy type used as a unique type to disambiguate parallel algorithm overloading based on combining an underlying parallel_task_execution_policy with an executor, and indicate that a parallel algorithm's execution may be parallelized.

The algorithm returns a future representing the result of the corresponding algorithm when invoked with the parallel_execution_policy.

parallel_task_execution_policy_shim public types

  1. typedef Parameters executor_parameters_type;

    The type of the executor parameters object associated with this execution policy.

  2. typedef executor_traits< executor_type >::execution_category execution_category;

    The category of the execution agents created by this execution policy.

parallel_task_execution_policy_shim public member functions

  1. parallel_task_execution_policy_shim 
    operator()(task_execution_policy_tag tag) const;

    Create a new parallel_task_execution_policy_shim from itself

    Parameters:

    tag

    [in] Specify that the corresponding asynchronous execution policy should be used

    Returns:

    The new parallel_task_execution_policy_shim

  2. template<typename Executor_> 
      rebind_executor< parallel_task_execution_policy_shim, Executor_, executor_parameters_type >::type 
      on(Executor_ && exec) const;

    Create a new parallel_task_execution_policy_shim from the given executor.

    [Note]Note

    Requires: is_executor<Executor>::value is true

    Parameters:

    exec

    [in] The executor to use for the execution of the parallel algorithm the returned execution policy is used with.

    Returns:

    The new parallel_task_execution_policy_shim

  3. template<typename Parameters_> 
      rebind_executor< parallel_task_execution_policy_shim, Executor, Parameters_ >::type 
      with(Parameters_ && params) const;

    Create a new parallel_task_execution_policy_shim from the given execution parameters.

    [Note]Note

    Requires: is_executor_parameters<Parameters>::value is true

    Parameters:

    params

    [in] The executor parameters to use for the execution of the parallel algorithm the returned execution policy is used with.

    Returns:

    The new parallel_task_execution_policy_shim

  4. Executor & executor();
    Return the associated executor object.
  5. Executor const & executor() const;
    Return the associated executor object.
  6. Parameters & parameters();
    Return the associated executor parameters object.
  7. Parameters const & parameters() const;
    Return the associated executor parameters object.

Struct template rebind

hpx::parallel::v1::parallel_task_execution_policy_shim::rebind

Synopsis

// In header: <hpx/parallel/execution_policy.hpp>


template<typename Executor_, typename Parameters_> 
struct rebind {
  // types
  typedef parallel_task_execution_policy_shim< Executor_, Parameters_ > type;  // The type of the rebound execution policy. 
};

Description

Rebind the type of executor used by this execution policy. The execution category of Executor shall not be weaker than that of this execution policy.


Struct parallel_execution_policy

hpx::parallel::v1::parallel_execution_policy

Synopsis

// In header: <hpx/parallel/execution_policy.hpp>


struct parallel_execution_policy {
  // types
  typedef parallel::parallel_executor      executor_type;             // The type of the executor associated with this execution policy. 
  typedef unspecified                      executor_parameters_type;
  typedef parallel::parallel_execution_tag execution_category;      

  // member classes/structs/unions
  template<typename Executor_, typename Parameters_> 
  struct rebind {
    // types
    typedef parallel_execution_policy_shim< Executor_, Parameters_ > type;  // The type of the rebound execution policy. 
  };

  // public member functions
  parallel_task_execution_policy operator()(task_execution_policy_tag) const;
  template<typename Executor> 
    rebind_executor< parallel_execution_policy, Executor, executor_parameters_type >::type 
    on(Executor &&) const;
  template<typename Parameters> 
    rebind_executor< parallel_execution_policy, executor_type, Parameters >::type 
    with(Parameters &&) const;

  // public static functions
  static executor_type & executor();
  static executor_parameters_type & parameters();
};

Description

The class parallel_execution_policy is an execution policy type used as a unique type to disambiguate parallel algorithm overloading and indicate that a parallel algorithm's execution may be parallelized.

parallel_execution_policy public types

  1. typedef unspecified executor_parameters_type;

    The type of the executor parameters object associated with this execution policy.

  2. typedef parallel::parallel_execution_tag execution_category;

    The category of the execution agents created by this execution policy.

parallel_execution_policy public member functions

  1. parallel_task_execution_policy operator()(task_execution_policy_tag tag) const;

    Create a new parallel_task_execution_policy.

    Parameters:

    tag

    [in] Specify that the corresponding asynchronous execution policy should be used

    Returns:

    The new parallel_task_execution_policy

  2. template<typename Executor> 
      rebind_executor< parallel_execution_policy, Executor, executor_parameters_type >::type 
      on(Executor && exec) const;

    Create a new parallel_execution_policy from the given executor.

    Parameters:

    exec

    [in] The executor to use for the execution of the parallel algorithm the returned execution policy is used with

    Returns:

    The new parallel_execution_policy

  3. template<typename Parameters> 
      rebind_executor< parallel_execution_policy, executor_type, Parameters >::type 
      with(Parameters && params) const;

    Create a new parallel_execution_policy from the given execution parameters

    [Note]Note

    Requires: is_executor_parameters<Parameters>::value is true

    Parameters:

    params

    [in] The executor parameters to use for the execution of the parallel algorithm the returned execution policy is used with.

    Template Parameters:

    Parameters

    The type of the executor parameters to associate with this execution policy.

    Returns:

    The new parallel_execution_policy

parallel_execution_policy public static functions

  1. static executor_type & executor();
    Return the associated executor object.
  2. static executor_parameters_type & parameters();
    Return the associated executor parameters object.
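
on() and with() can be chained to rebind par to a concrete executor and to executor parameters. A minimal sketch (assuming an HPX build; static_chunk_size and parallel_executor are the executor-parameters and executor types shipped with HPX):

```cpp
#include <hpx/include/parallel_for_each.hpp>
#include <hpx/include/parallel_executors.hpp>
#include <vector>

void scale_all(std::vector<double>& v)
{
    using namespace hpx::parallel;

    parallel_executor exec;        // executor to run the iterations on
    static_chunk_size chunk(100);  // fixed chunks of 100 iterations each

    // Each call returns a new rebound policy; par itself is unchanged.
    for_each(par.on(exec).with(chunk), v.begin(), v.end(),
        [](double& d) { d *= 2.0; });
}
```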

Struct template rebind

hpx::parallel::v1::parallel_execution_policy::rebind

Synopsis

// In header: <hpx/parallel/execution_policy.hpp>


template<typename Executor_, typename Parameters_> 
struct rebind {
  // types
  typedef parallel_execution_policy_shim< Executor_, Parameters_ > type;  // The type of the rebound execution policy. 
};

Description

Rebind the type of executor used by this execution policy. The execution category of Executor shall not be weaker than that of this execution policy.


Struct template parallel_execution_policy_shim

hpx::parallel::v1::parallel_execution_policy_shim

Synopsis

// In header: <hpx/parallel/execution_policy.hpp>

template<typename Executor, typename Parameters> 
struct parallel_execution_policy_shim :
  public hpx::parallel::v1::parallel_execution_policy
{
  // types
  typedef Executor                                             executor_type;             // The type of the executor associated with this execution policy. 
  typedef Parameters                                           executor_parameters_type;
  typedef executor_traits< executor_type >::execution_category execution_category;      

  // member classes/structs/unions
  template<typename Executor_, typename Parameters_> 
  struct rebind {
    // types
    typedef parallel_execution_policy_shim< Executor_, Parameters_ > type;  // The type of the rebound execution policy. 
  };

  // public member functions
  parallel_task_execution_policy_shim< Executor, Parameters > 
  operator()(task_execution_policy_tag) const;
  template<typename Executor_> 
    rebind_executor< parallel_execution_policy_shim, Executor_, executor_parameters_type >::type 
    on(Executor_ &&) const;
  template<typename Parameters_> 
    rebind_executor< parallel_execution_policy_shim, Executor, Parameters_ >::type 
    with(Parameters_ &&) const;
  Executor & executor();
  Executor const & executor() const;
  Parameters & parameters();
  Parameters const & parameters() const;
};

Description

The class parallel_execution_policy_shim is an execution policy type used as a unique type to disambiguate parallel algorithm overloading and indicate that a parallel algorithm's execution may be parallelized.

parallel_execution_policy_shim public types

  1. typedef Parameters executor_parameters_type;

    The type of the executor parameters object associated with this execution policy.

  2. typedef executor_traits< executor_type >::execution_category execution_category;

    The category of the execution agents created by this execution policy.

parallel_execution_policy_shim public member functions

  1. parallel_task_execution_policy_shim< Executor, Parameters > 
    operator()(task_execution_policy_tag tag) const;

    Create a new parallel_task_execution_policy_shim.

    Parameters:

    tag

    [in] Specify that the corresponding asynchronous execution policy should be used

    Returns:

    The new parallel_task_execution_policy_shim

  2. template<typename Executor_> 
      rebind_executor< parallel_execution_policy_shim, Executor_, executor_parameters_type >::type 
      on(Executor_ && exec) const;

    Create a new parallel_execution_policy_shim from the given executor.

    [Note]Note

    Requires: is_executor<Executor>::value is true

    Parameters:

    exec

    [in] The executor to use for the execution of the parallel algorithm the returned execution policy is used with.

    Returns:

    The new parallel_execution_policy_shim

  3. template<typename Parameters_> 
      rebind_executor< parallel_execution_policy_shim, Executor, Parameters_ >::type 
      with(Parameters_ && params) const;

    Create a new parallel_execution_policy_shim from the given execution parameters.

    [Note]Note

    Requires: is_executor_parameters<Parameters>::value is true

    Parameters:

    params

    [in] The executor parameters to use for the execution of the parallel algorithm the returned execution policy is used with.

    Returns:

    The new parallel_execution_policy_shim

  4. Executor & executor();
    Return the associated executor object.
  5. Executor const & executor() const;
    Return the associated executor object.
  6. Parameters & parameters();
    Return the associated executor parameters object.
  7. Parameters const & parameters() const;
    Return the associated executor parameters object.

Struct template rebind

hpx::parallel::v1::parallel_execution_policy_shim::rebind

Synopsis

// In header: <hpx/parallel/execution_policy.hpp>


template<typename Executor_, typename Parameters_> 
struct rebind {
  // types
  typedef parallel_execution_policy_shim< Executor_, Parameters_ > type;  // The type of the rebound execution policy. 
};

Description

Rebind the type of executor used by this execution policy. The execution category of Executor shall not be weaker than that of this execution policy.


Struct parallel_vector_execution_policy

hpx::parallel::v1::parallel_vector_execution_policy

Synopsis

// In header: <hpx/parallel/execution_policy.hpp>


struct parallel_vector_execution_policy {
  // types
  typedef parallel::parallel_executor      executor_type;             // The type of the executor associated with this execution policy. 
  typedef unspecified                      executor_parameters_type;
  typedef parallel::parallel_execution_tag execution_category;      

  // public member functions
  parallel_vector_execution_policy operator()(task_execution_policy_tag) const;

  // public static functions
  static executor_type & executor();
  static executor_parameters_type & parameters();
};

Description

The class parallel_vector_execution_policy is an execution policy type used as a unique type to disambiguate parallel algorithm overloading and indicate that a parallel algorithm's execution may be vectorized.

parallel_vector_execution_policy public types

  1. typedef unspecified executor_parameters_type;

    The type of the executor parameters object associated with this execution policy.

  2. typedef parallel::parallel_execution_tag execution_category;

    The category of the execution agents created by this execution policy.

parallel_vector_execution_policy public member functions

  1. parallel_vector_execution_policy 
    operator()(task_execution_policy_tag tag) const;

    Create a new parallel_vector_execution_policy from itself

    Parameters:

    tag

    [in] Specify that the corresponding asynchronous execution policy should be used

    Returns:

    The new parallel_vector_execution_policy

parallel_vector_execution_policy public static functions

  1. static executor_type & executor();
    Return the associated executor object.
  2. static executor_parameters_type & parameters();
    Return the associated executor parameters object.

Struct template is_rebound_execution_policy

hpx::parallel::v1::is_rebound_execution_policy

Synopsis

// In header: <hpx/parallel/execution_policy.hpp>

template<typename T> 
struct is_rebound_execution_policy {
};

Struct template is_execution_policy

hpx::parallel::v1::is_execution_policy

Synopsis

// In header: <hpx/parallel/execution_policy.hpp>

template<typename T> 
struct is_execution_policy {
};

Description

  1. The type is_execution_policy can be used to detect execution policies for the purpose of excluding function signatures from otherwise ambiguous overload resolution participation.

  2. If T is the type of a standard or implementation-defined execution policy, is_execution_policy<T> shall be publicly derived from integral_constant<bool, true>, otherwise from integral_constant<bool, false>.

  3. The behavior of a program that adds specializations for is_execution_policy is undefined.
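
In practice the trait is used in static_assert or SFINAE contexts to constrain overloads. A small sketch (assuming the HPX header named above is available):

```cpp
#include <hpx/parallel/execution_policy.hpp>

using hpx::parallel::is_execution_policy;
using hpx::parallel::parallel_execution_policy;

// The trait derives from integral_constant<bool, ...>, so ::value is usable
// directly in compile-time checks.
static_assert(is_execution_policy<parallel_execution_policy>::value,
    "parallel_execution_policy is an execution policy");
static_assert(!is_execution_policy<int>::value,
    "arbitrary types are not execution policies");
```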


Struct template is_parallel_execution_policy

hpx::parallel::v1::is_parallel_execution_policy

Synopsis

// In header: <hpx/parallel/execution_policy.hpp>

template<typename T> 
struct is_parallel_execution_policy {
};

Description

Extension: Detect whether given execution policy enables parallelization

  1. The type is_parallel_execution_policy can be used to detect parallel execution policies for the purpose of excluding function signatures from otherwise ambiguous overload resolution participation.

  2. If T is the type of a standard or implementation-defined execution policy, is_parallel_execution_policy<T> shall be publicly derived from integral_constant<bool, true>, otherwise from integral_constant<bool, false>.

  3. The behavior of a program that adds specializations for is_parallel_execution_policy is undefined.


Struct template is_sequential_execution_policy

hpx::parallel::v1::is_sequential_execution_policy

Synopsis

// In header: <hpx/parallel/execution_policy.hpp>

template<typename T> 
struct is_sequential_execution_policy {
};

Description

Extension: Detect whether given execution policy does not enable parallelization

  1. The type is_sequential_execution_policy can be used to detect non-parallel execution policies for the purpose of excluding function signatures from otherwise ambiguous overload resolution participation.

  2. If T is the type of a standard or implementation-defined execution policy, is_sequential_execution_policy<T> shall be publicly derived from integral_constant<bool, true>, otherwise from integral_constant<bool, false>.

  3. The behavior of a program that adds specializations for is_sequential_execution_policy is undefined.


Struct template is_async_execution_policy

hpx::parallel::v1::is_async_execution_policy

Synopsis

// In header: <hpx/parallel/execution_policy.hpp>

template<typename T> 
struct is_async_execution_policy {
};

Description

Extension: Detect whether given execution policy makes algorithms asynchronous

  1. The type is_async_execution_policy can be used to detect asynchronous execution policies for the purpose of excluding function signatures from otherwise ambiguous overload resolution participation.

  2. If T is the type of a standard or implementation-defined execution policy, is_async_execution_policy<T> shall be publicly derived from integral_constant<bool, true>, otherwise from integral_constant<bool, false>.

  3. The behavior of a program that adds specializations for is_async_execution_policy is undefined.


Class execution_policy

hpx::parallel::v1::execution_policy

Synopsis

// In header: <hpx/parallel/execution_policy.hpp>


class execution_policy {
public:
  // construct/copy/destruct
  template<typename ExPolicy> 
    execution_policy(ExPolicy const &, 
                     typename std::enable_if< is_execution_policy< ExPolicy >::value &&!is_rebound_execution_policy< ExPolicy >::value, ExPolicy >::type * = 0);
  execution_policy(execution_policy &&);
  execution_policy(execution_policy const &);
  template<typename ExPolicy> 
    std::enable_if< is_execution_policy< ExPolicy >::value &&!is_rebound_execution_policy< ExPolicy >::value, execution_policy >::type & 
    operator=(ExPolicy const &);
  execution_policy & operator=(execution_policy &&);

  // public member functions
  execution_policy operator()(task_execution_policy_tag) const;
  launch launch_policy() const;
  std::type_info const & type() const;
  template<typename ExPolicy> ExPolicy * get();
  template<typename ExPolicy> ExPolicy const * get() const;
};

Description

An execution policy is an object that expresses the requirements on the ordering of functions invoked as a consequence of the invocation of a standard algorithm. Execution policies afford standard algorithms the discretion to execute in parallel.

  1. The class execution_policy is a dynamic container for execution policy objects. execution_policy allows dynamic control over standard algorithm execution.

  2. Objects of type execution_policy shall be constructible and assignable from objects of type T for which is_execution_policy<T>::value is true.

execution_policy public construct/copy/destruct

  1. template<typename ExPolicy> 
      execution_policy(ExPolicy const & policy, 
                       typename std::enable_if< is_execution_policy< ExPolicy >::value &&!is_rebound_execution_policy< ExPolicy >::value, ExPolicy >::type * = 0);

    Effects: Constructs an execution_policy object with a copy of policy's state. Requires: is_execution_policy<ExPolicy>::value is true

    Parameters:

    policy

    Specifies the inner execution policy

  2. execution_policy(execution_policy && policy);

    Move constructs a new execution_policy object.

    Parameters:

    policy

    Specifies the inner execution policy

  3. execution_policy(execution_policy const & rhs);

    Copy constructs a new execution_policy object.

    Parameters:

    rhs

    Specifies the inner execution policy

  4. template<typename ExPolicy> 
      std::enable_if< is_execution_policy< ExPolicy >::value &&!is_rebound_execution_policy< ExPolicy >::value, execution_policy >::type & 
      operator=(ExPolicy const & policy);

    Effects: Assigns a copy of policy's state to *this. Returns: *this. Requires: is_execution_policy<ExPolicy>::value is true

    Parameters:

    policy

    Specifies the inner execution policy

  5. execution_policy & operator=(execution_policy && policy);

    Move assigns a new execution policy to the object.

    Parameters:

    policy

    Specifies the inner execution policy

execution_policy public member functions

  1. execution_policy operator()(task_execution_policy_tag tag) const;

    Extension: Create a new execution_policy holding the current policy made asynchronous.

    Parameters:

    tag

    [in] Specify that the corresponding asynchronous execution policy should be used

    Returns:

    The new execution_policy

  2. launch launch_policy() const;

    Extension: Retrieve default launch policy for this execution policy.

    Returns:

    The associated default launch policy

  3. std::type_info const & type() const;

    Returns: typeid(T), such that T is the type of the execution policy object contained by *this

  4. template<typename ExPolicy> ExPolicy * get();

    Returns: If type() == typeid(ExPolicy), a pointer to the stored execution policy object; otherwise a null pointer. Requires: is_execution_policy<ExPolicy>::value is true

  5. template<typename ExPolicy> ExPolicy const * get() const;

    Returns: If type() == typeid(ExPolicy), a pointer to the stored execution policy object; otherwise a null pointer. Requires: is_execution_policy<ExPolicy>::value is true


Global task

hpx::parallel::v1::task

Synopsis

// In header: <hpx/parallel/execution_policy.hpp>

static task_execution_policy_tag const task;

Description

The execution policy tag task can be used to create an execution policy which forces the given algorithm to be executed asynchronously.


Global seq

hpx::parallel::v1::seq — Default sequential execution policy object.

Synopsis


Global par

hpx::parallel::v1::par — Default parallel execution policy object.

Synopsis


Global par_vec

hpx::parallel::v1::par_vec — Default vector execution policy object.

Synopsis

namespace hpx {
  namespace parallel {
    namespace v3 {
      struct auto_chunk_size;
    }
  }
}

Struct auto_chunk_size

hpx::parallel::v3::auto_chunk_size

Synopsis

// In header: <hpx/parallel/executors/auto_chunk_size.hpp>


struct auto_chunk_size {
  // construct/copy/destruct
  auto_chunk_size();
  explicit auto_chunk_size(util::steady_duration const &);
};

Description

Loop iterations are divided into pieces and then assigned to threads. The number of loop iterations combined into each piece is determined by measuring how long the execution of 1% of the overall number of iterations takes. This executor parameters type ensures that as many loop iterations are combined as necessary to run for the specified amount of time.

auto_chunk_size public construct/copy/destruct

  1. auto_chunk_size();

    Construct an auto_chunk_size executor parameters object

    [Note]Note

    Default constructed auto_chunk_size executor parameter types will use 80 microseconds as the minimal time for which any of the scheduled chunks should run.

  2. explicit auto_chunk_size(util::steady_duration const & rel_time);

    Construct an auto_chunk_size executor parameters object

    Parameters:

    rel_time

    [in] The time duration to use as the minimum to decide how many loop iterations should be combined.

namespace hpx {
  namespace parallel {
    namespace v3 {
      struct dynamic_chunk_size;
    }
  }
}

Struct dynamic_chunk_size

hpx::parallel::v3::dynamic_chunk_size

Synopsis

Description

Loop iterations are divided into pieces of size chunk_size and then dynamically scheduled among the threads; when a thread finishes one chunk, it is dynamically assigned another. If chunk_size is not specified, the default chunk size is 1.

[Note]Note

This executor parameters type is equivalent to OpenMP's DYNAMIC scheduling directive.

dynamic_chunk_size public construct/copy/destruct

  1. explicit dynamic_chunk_size(std::size_t chunk_size = 1);

    Construct a dynamic_chunk_size executor parameters object

    Parameters:

    chunk_size

    [in] The optional chunk size to use as the number of loop iterations to schedule together. The default chunk size is 1.

namespace hpx {
  namespace parallel {
    namespace v3 {
      struct sequential_executor_parameters;
      typedef Parameters executor_parameters_type;
      template<typename Executor> 
        bool variable_chunk_size(executor_parameters_type &, Executor &);
      template<typename Executor, typename F> 
        std::size_t get_chunk_size(executor_parameters_type &, Executor &, 
                                   F &&, std::size_t);
      template<typename Executor> 
        void reset_thread_distribution(executor_parameters_type &, Executor &);
      std::size_t processing_units_count(executor_parameters_type &);
    }
  }
}

Struct sequential_executor_parameters

hpx::parallel::v3::sequential_executor_parameters

Synopsis


Type definition executor_parameters_type

executor_parameters_type

Synopsis

// In header: <hpx/parallel/executors/executor_parameter_traits.hpp>


typedef Parameters executor_parameters_type;

Description

The type of the executor parameters associated with this instance of executor_parameter_traits.


Function template variable_chunk_size

hpx::parallel::v3::variable_chunk_size

Synopsis

// In header: <hpx/parallel/executors/executor_parameter_traits.hpp>


template<typename Executor> 
  bool variable_chunk_size(executor_parameters_type & params, Executor & exec);

Description

Returns whether the number of loop iterations to combine is different for each of the generated chunks.

[Note]Note

This calls params.variable_chunk_size(exec), if available, otherwise it returns false.

Parameters:

exec

[in] The executor object which will be used for scheduling of the tasks.

params

[in] The executor parameters object to use for determining whether the chunk size is variable.


Function template get_chunk_size

hpx::parallel::v3::get_chunk_size

Synopsis

// In header: <hpx/parallel/executors/executor_parameter_traits.hpp>


template<typename Executor, typename F> 
  std::size_t get_chunk_size(executor_parameters_type & params, 
                             Executor & exec, F && f, std::size_t num_tasks);

Description

Returns the number of invocations of the given function f which should be combined into a single task.

[Note]Note

The parameter f is expected to be a nullary function returning a std::size_t representing the number of iterations the function has already executed (i.e. which don't have to be scheduled anymore).

Parameters:

exec

[in] The executor object which will be used for scheduling the loop iterations.

f

[in] The function which will be optionally scheduled using the given executor.

num_tasks

[in] The number of tasks the chunk size should be determined for

params

[in] The executor parameters object to use for determining the chunk size for the given number of tasks num_tasks.


Function template reset_thread_distribution

hpx::parallel::v3::reset_thread_distribution

Synopsis

// In header: <hpx/parallel/executors/executor_parameter_traits.hpp>


template<typename Executor> 
  void reset_thread_distribution(executor_parameters_type & params, 
                                 Executor & exec);

Description

Reset the internal round robin thread distribution scheme for the given executor.

[Note]Note

This calls params.reset_thread_distribution(exec) if it exists; otherwise it does nothing.

Parameters:

exec

[in] The executor object to use.

params

[in] The executor parameters object to use for resetting the thread distribution scheme.


Function processing_units_count

hpx::parallel::v3::processing_units_count

Synopsis

// In header: <hpx/parallel/executors/executor_parameter_traits.hpp>


std::size_t processing_units_count(executor_parameters_type & params);

Description

Retrieve the number of (kernel-)threads used by the associated executor.

[Note]Note

This calls exec.processing_units_count() if it exists; otherwise it forwards the request to the executor parameters object.

Parameters:

params

[in] The executor parameters object to use as a fallback if the executor does not expose this functionality.

namespace hpx {
  namespace parallel {
    namespace v3 {
      struct sequential_execution_tag;
      struct parallel_execution_tag;
      struct vector_execution_tag;
      template<typename Executor, typename Enable> struct executor_traits;
    }
  }
}

Struct sequential_execution_tag

hpx::parallel::v3::sequential_execution_tag

Synopsis

Description

Function invocations executed by a group of sequential execution agents execute in sequential order.


Struct parallel_execution_tag

hpx::parallel::v3::parallel_execution_tag

Synopsis

Description

Function invocations executed by a group of parallel execution agents execute in unordered fashion. Any such invocations executing in the same thread are indeterminately sequenced with respect to each other.


Struct vector_execution_tag

hpx::parallel::v3::vector_execution_tag

Synopsis

Description

Function invocations executed by a group of vector execution agents are permitted to execute in unordered fashion when executed in different threads, and un-sequenced with respect to one another when executed in the same thread.


Struct template executor_traits

hpx::parallel::v3::executor_traits

Synopsis

// In header: <hpx/parallel/executors/executor_traits.hpp>

template<typename Executor, typename Enable> 
struct executor_traits {
  // types
  typedef Executor    executor_type;     
  typedef unspecified execution_category;

  // member classes/structs/unions
  template<typename T> 
  struct future {
    // types
    typedef unspecified type;  // The future type returned from async_execute. 
  };

  // public static functions
  template<typename F> static void apply_execute(executor_type &, F &&);
  template<typename F> static auto async_execute(executor_type &, F &&);
  template<typename F> static auto execute(executor_type &, F &&);
  template<typename F, typename Shape> 
    static auto async_execute(executor_type &, F &&, Shape const &);
  template<typename F, typename Shape> 
    static auto execute(executor_type &, F &&, Shape const &);
};

Description

The executor_traits type is used to request execution agents from an executor. It is analogous to the interaction between containers and allocator_traits.

[Note]Note

For maximum implementation flexibility, executor_traits does not require executors to implement a particular exception reporting mechanism. Executors may choose whether or not to report exceptions, and if so, in what manner they are communicated back to the caller. However, we expect many executors to report exceptions in a manner consistent with the behavior of execution policies described by the Parallelism TS, where multiple exceptions are collected into an exception_list. This list would be reported through async_execute()'s returned future, or thrown directly by execute().

executor_traits public types

  1. typedef Executor executor_type;

    The type of the executor associated with this instance of executor_traits

  2. typedef unspecified execution_category;

    The category of agents created by the bulk-form execute() and async_execute()

    [Note]Note

    This evaluates to executor_type::execution_category if it exists; otherwise it evaluates to parallel_execution_tag.

executor_traits public static functions

  1. template<typename F> static void apply_execute(executor_type & exec, F && f);
    Singleton form of asynchronous fire & forget execution agent creation.

    This asynchronously (fire & forget) creates a single function invocation f() using the associated executor.

    [Note]Note

    This calls exec.apply_execute(f), if available; otherwise it calls exec.async_execute(f) while discarding the returned future.

    Parameters:

    exec

    [in] The executor object to use for scheduling of the function f.

    f

    [in] The function which will be scheduled using the given executor.

  2. template<typename F> static auto async_execute(executor_type & exec, F && f);
    Singleton form of asynchronous execution agent creation.

    This asynchronously creates a single function invocation f() using the associated executor.

    [Note]Note

    Executors have to implement only async_execute(). All other functions will be emulated by this executor_traits in terms of this single basic primitive. However, some executors will naturally specialize all operations for maximum efficiency.

    This calls exec.async_execute(f)

    Parameters:

    exec

    [in] The executor object to use for scheduling of the function f.

    f

    [in] The function which will be scheduled using the given executor.

    Returns:

    f()'s result through a future

  3. template<typename F> static auto execute(executor_type & exec, F && f);
    Singleton form of synchronous execution agent creation.

    This synchronously creates a single function invocation f() using the associated executor. The execution of the supplied function synchronizes with the caller

    [Note]Note

    This calls exec.execute(f) if it exists; otherwise hpx::async(f).get()

    Parameters:

    exec

    [in] The executor object to use for scheduling of the function f.

    f

    [in] The function which will be scheduled using the given executor.

    Returns:

    f()'s result

  4. template<typename F, typename Shape> 
      static auto async_execute(executor_type & exec, F && f, Shape const & shape);
    Bulk form of asynchronous execution agent creation.

    This asynchronously creates a group of function invocations f(i) whose ordering is given by the execution_category associated with the executor.

    Here i takes on all values in the index space implied by shape. All exceptions thrown by invocations of f(i) are reported in a manner consistent with parallel algorithm execution through the returned future.

    [Note]Note

    This calls exec.async_execute(f, shape) if it exists; otherwise it executes hpx::async(f, i) as often as needed.

    Parameters:

    exec

    [in] The executor object to use for scheduling of the function f.

    f

    [in] The function which will be scheduled using the given executor.

    shape

    [in] The shape object which defines the iteration boundaries for the arguments to be passed to f.

    Returns:

    The return type of executor_type::async_execute if defined by executor_type. Otherwise a vector of futures holding the returned value of each invocation of f.

  5. template<typename F, typename Shape> 
      static auto execute(executor_type & exec, F && f, Shape const & shape);
    Bulk form of synchronous execution agent creation.

    This synchronously creates a group of function invocations f(i) whose ordering is given by the execution_category associated with the executor. The function synchronizes the execution of all scheduled functions with the caller.

    Here i takes on all values in the index space implied by shape. All exceptions thrown by invocations of f(i) are reported in a manner consistent with parallel algorithm execution through the returned future.

    [Note]Note

    This calls exec.execute(f, shape) if it exists; otherwise it executes hpx::async(f, i) as often as needed.

    Parameters:

    exec

    [in] The executor object to use for scheduling of the function f.

    f

    [in] The function which will be scheduled using the given executor.

    shape

    [in] The shape object which defines the iteration boundaries for the arguments to be passed to f.

    Returns:

    The return type of executor_type::execute if defined by executor_type. Otherwise a vector holding the returned value of each invocation of f, except when f returns void, in which case void is returned.

Struct template future

hpx::parallel::v3::executor_traits::future

Synopsis

// In header: <hpx/parallel/executors/executor_traits.hpp>


template<typename T> 
struct future {
  // types
  typedef unspecified type;  // The future type returned from async_execute. 
};

Description

The type of future returned by async_execute()

[Note]Note

This evaluates to executor_type::future_type<T> if it exists; otherwise it evaluates to hpx::future<T>

namespace hpx {
  namespace parallel {
    namespace v3 {
      struct guided_chunk_size;
    }
  }
}

Struct guided_chunk_size

hpx::parallel::v3::guided_chunk_size

Synopsis

Description

Iterations are dynamically assigned to threads in blocks as threads request them until no blocks remain to be assigned. Similar to dynamic_chunk_size except that the block size decreases each time a number of loop iterations is given to a thread. The size of the initial block is proportional to number_of_iterations / number_of_cores. Subsequent blocks are proportional to number_of_iterations_remaining / number_of_cores. The optional chunk size parameter defines the minimum block size. The default chunk size is 1.

[Note]Note

This executor parameters type is equivalent to OpenMP's GUIDED scheduling directive.

guided_chunk_size public construct/copy/destruct

  1. explicit guided_chunk_size(std::size_t min_chunk_size = 1);

    Construct a guided_chunk_size executor parameters object

    Parameters:

    min_chunk_size

    [in] The optional minimal chunk size to use as the minimal number of loop iterations to schedule together. The default minimal chunk size is 1.

namespace hpx {
  namespace parallel {
    namespace v3 {
      struct parallel_executor;
    }
  }
}

Struct parallel_executor

hpx::parallel::v3::parallel_executor

Synopsis

Description

A parallel_executor creates groups of parallel execution agents which execute in threads implicitly created by the executor. This executor prefers continuing with the creating thread before executing newly created threads.

parallel_executor public types

  1. typedef auto_chunk_size executor_parameters_type;

    Associate the auto_chunk_size executor parameters type as a default with this executor.

parallel_executor public construct/copy/destruct

  1. explicit parallel_executor(launch l = launch::async);
    Create a new parallel executor.
namespace hpx {
  namespace parallel {
    namespace v3 {
      struct sequential_executor;
    }
  }
}

Struct sequential_executor

hpx::parallel::v3::sequential_executor

Synopsis

Description

A sequential_executor creates groups of sequential execution agents which execute in the calling thread. The sequential order is given by the lexicographical order of indices in the index space.

sequential_executor public construct/copy/destruct

  1. sequential_executor();
    Create a new sequential executor.
namespace hpx {
  namespace parallel {
    namespace v3 {
      struct service_executor;
    }
  }
}

Struct service_executor

hpx::parallel::v3::service_executor

Synopsis

// In header: <hpx/parallel/executors/service_executors.hpp>


struct service_executor {
  // types
  typedef static_chunk_size executor_parameters_type;

  // construct/copy/destruct
  service_executor(threads::executors::service_executor_type, 
                   char const * = "");
};

Description

A service_executor exposes one of the predefined HPX thread pools through an executor interface.

[Note]Note

All tasks executed by one of these executors will run on one of the OS-threads dedicated to the given thread pool. The tasks will not run as HPX-threads.

service_executor public types

  1. typedef static_chunk_size executor_parameters_type;

    Associate the static_chunk_size executor parameters type as a default with this executor.

service_executor public construct/copy/destruct

  1. service_executor(threads::executors::service_executor_type t, 
                     char const * name_suffix = "");

    Create a new service executor for the given HPX thread pool

    Parameters:

    name_suffix

    [in] The name suffix to use for the underlying thread pool

    t

    [in] Specifies the HPX thread pool to encapsulate

namespace hpx {
  namespace parallel {
    namespace v3 {
      struct static_chunk_size;
    }
  }
}

Struct static_chunk_size

hpx::parallel::v3::static_chunk_size

Synopsis

Description

Loop iterations are divided into pieces of size chunk_size and then assigned to threads. If chunk_size is not specified, the iterations are evenly (if possible) divided contiguously among the threads.

[Note]Note

This executor parameters type is equivalent to OpenMP's STATIC scheduling directive.

static_chunk_size public construct/copy/destruct

  1. static_chunk_size();

    Construct a static_chunk_size executor parameters object

    [Note]Note

    By default the number of loop iterations is determined from the number of available cores and the overall number of loop iterations to schedule.

  2. explicit static_chunk_size(std::size_t chunk_size);

    Construct a static_chunk_size executor parameters object

    Parameters:

    chunk_size

    [in] The optional chunk size to use as the number of loop iterations to run on a single thread.

namespace hpx {
  namespace parallel {
    namespace v3 {
      typedef threads::executors::local_priority_queue_executor local_priority_queue_executor;
    }
  }
}

Type definition local_priority_queue_executor

local_priority_queue_executor

Synopsis

// In header: <hpx/parallel/executors/thread_pool_executors.hpp>


typedef threads::executors::local_priority_queue_executor local_priority_queue_executor;

Description

Creates a new local_priority_queue_executor

namespace hpx {
  namespace parallel {
    namespace v3 {
      template<typename Executor, typename Enable> struct timed_executor_traits;
    }
  }
}

Struct template timed_executor_traits

hpx::parallel::v3::timed_executor_traits

Synopsis

// In header: <hpx/parallel/executors/timed_executor_traits.hpp>

template<typename Executor, typename Enable> 
struct timed_executor_traits :
  public hpx::parallel::v3::executor_traits< Executor >
{
  // types
  typedef executor_traits< Executor >::executor_type      executor_type;     
  typedef executor_traits< Executor >::execution_category execution_category;

  // member classes/structs/unions
  template<typename T> 
  struct future {
    // types
    typedef unspecified type;  // The future type returned from async_execute. 
  };

  // public static functions
  template<typename F> 
    static void apply_execute_at(executor_type &, 
                                 hpx::util::steady_time_point const &, F &&);
  template<typename F> 
    static void apply_execute_after(executor_type &, 
                                    hpx::util::steady_duration const &, F &&);
  template<typename F> 
    static auto async_execute_at(executor_type &, 
                                 hpx::util::steady_time_point const &, F &&);
  template<typename F> 
    static auto async_execute_after(executor_type &, 
                                    hpx::util::steady_duration const &, F &&);
  template<typename F> 
    static auto execute_at(executor_type &, 
                           hpx::util::steady_time_point const &, F &&);
  template<typename F> 
    static auto execute_after(executor_type &, 
                              hpx::util::steady_duration const &, F &&);
};

Description

The timed_executor_traits type is used to request execution agents from an executor. It is analogous to the interaction between containers and allocator_traits. The generated execution agents support timed scheduling functionality (in addition to what is supported by execution agents generated using the executor_traits type).

timed_executor_traits public types

  1. typedef executor_traits< Executor >::executor_type executor_type;

    The type of the executor associated with this instance of executor_traits

  2. typedef executor_traits< Executor >::execution_category execution_category;

    The category of agents created by the bulk-form execute() and async_execute()

    [Note]Note

    This evaluates to executor_type::execution_category if it exists; otherwise it evaluates to parallel_execution_tag.

timed_executor_traits public static functions

  1. template<typename F> 
      static void apply_execute_at(executor_type & exec, 
                                   hpx::util::steady_time_point const & abs_time, 
                                   F && f);
    Singleton form of asynchronous fire & forget execution agent creation supporting timed execution.

    This asynchronously (fire & forget) creates a single function invocation f() using the associated executor at the given point in time.

    [Note]Note

    This calls exec.apply_execute_at(abs_time, f), if available, otherwise it emulates timed scheduling by delaying calling exec.apply_execute() on the underlying non-scheduled execution agent while discarding the returned future.

    Parameters:

    abs_time

    [in] The point in time the given function should be scheduled at to run.

    exec

    [in] The executor object to use for scheduling of the function f.

    f

    [in] The function which will be scheduled using the given executor.

  2. template<typename F> 
      static void apply_execute_after(executor_type & exec, 
                                      hpx::util::steady_duration const & rel_time, 
                                      F && f);
    Singleton form of asynchronous fire & forget execution agent creation supporting timed execution.

    This asynchronously (fire & forget) creates a single function invocation f() using the associated executor after the given amount of time.

    [Note]Note

    This calls exec.apply_execute_after(rel_time, f), if available, otherwise it emulates timed scheduling by delaying calling exec.apply_execute() on the underlying non-scheduled execution agent while discarding the returned future.

    Parameters:

    exec

    [in] The executor object to use for scheduling of the function f.

    f

    [in] The function which will be scheduled using the given executor.

    rel_time

    [in] The duration of time after which the given function should be scheduled to run.

  3. template<typename F> 
      static auto async_execute_at(executor_type & exec, 
                                   hpx::util::steady_time_point const & abs_time, 
                                   F && f);
    Singleton form of asynchronous execution agent creation supporting timed execution.

    This asynchronously creates a single function invocation f() using the associated executor at the given point in time.

    [Note]Note

    This calls exec.async_execute_at(abs_time, f), if available, otherwise it emulates timed scheduling by delaying calling exec.async_execute() on the underlying non-scheduled execution agent.

    Parameters:

    abs_time

    [in] The point in time the given function should be scheduled at to run.

    exec

    [in] The executor object to use for scheduling of the function f.

    f

    [in] The function which will be scheduled using the given executor.

    Returns:

    f()'s result through a future

  4. template<typename F> 
      static auto async_execute_after(executor_type & exec, 
                                      hpx::util::steady_duration const & rel_time, 
                                      F && f);
    Singleton form of asynchronous execution agent creation supporting timed execution.

    This asynchronously creates a single function invocation f() using the associated executor after the given amount of time.

    [Note]Note

    This calls exec.async_execute_after(rel_time, f), if available, otherwise it emulates timed scheduling by delaying calling exec.async_execute() on the underlying non-scheduled execution agent.

    Parameters:

    exec

    [in] The executor object to use for scheduling of the function f.

    f

    [in] The function which will be scheduled using the given executor.

    rel_time

    [in] The duration of time after which the given function should be scheduled to run.

    Returns:

    f()'s result through a future

  5. template<typename F> 
      static auto execute_at(executor_type & exec, 
                             hpx::util::steady_time_point const & abs_time, 
                             F && f);
    Singleton form of synchronous execution agent creation supporting timed execution.

    This synchronously creates a single function invocation f() using the associated executor at the given point in time. The execution of the supplied function synchronizes with the caller.

    [Note]Note

    This calls exec.execute_at(abs_time, f) if it exists; otherwise it emulates timed scheduling by delaying calling exec.execute() on the underlying non-scheduled execution agent.

    Parameters:

    abs_time

    [in] The point in time the given function should be scheduled at to run.

    exec

    [in] The executor object to use for scheduling of the function f.

    f

    [in] The function which will be scheduled using the given executor.

    Returns:

    f()'s result

  6. template<typename F> 
      static auto execute_after(executor_type & exec, 
                                hpx::util::steady_duration const & rel_time, 
                                F && f);
    Singleton form of synchronous execution agent creation supporting timed execution.

    This synchronously creates a single function invocation f() using the associated executor after the given amount of time. The execution of the supplied function synchronizes with the caller.

    [Note]Note

    This calls exec.execute_after(rel_time, f), if available; otherwise it emulates timed scheduling by delaying calling exec.execute() on the underlying non-scheduled execution agent.

    Parameters:

    exec

    [in] The executor object to use for scheduling of the function f.

    f

    [in] The function which will be scheduled using the given executor.

    rel_time

    [in] The duration of time after which the given function should be scheduled to run.

    Returns:

    f()'s result
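
    The timed operations above can be combined as in the following sketch (not part of the original reference; it assumes an executor type such as hpx::parallel::parallel_executor which satisfies the timed-executor requirements of this HPX version):

        #include <hpx/include/parallel_executors.hpp>
        #include <chrono>

        void timed_scheduling()
        {
            typedef hpx::parallel::parallel_executor executor_type;
            typedef hpx::parallel::timed_executor_traits<executor_type> traits;

            executor_type exec;

            // Schedule a task asynchronously, 100 milliseconds from now;
            // the result is returned through a future.
            hpx::future<void> f = traits::async_execute_after(
                exec, std::chrono::milliseconds(100), []() { /* ... */ });

            // Schedule a task synchronously at a given point in time; the
            // call does not return before the task has run.
            traits::execute_at(
                exec, std::chrono::steady_clock::now() + std::chrono::seconds(1),
                []() { /* ... */ });

            f.get();
        }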

Struct template future

hpx::parallel::v3::timed_executor_traits::future

Synopsis

// In header: <hpx/parallel/executors/timed_executor_traits.hpp>


template<typename T> 
struct future {
  // types
  typedef unspecified type;  // The future type returned from async_execute. 
};

Description

The type of future returned by async_execute()

[Note]Note

This evaluates to executor_type::future_type<T> if it exists; otherwise it evaluates to hpx::future<T>

namespace hpx {
  namespace parallel {
    namespace v2 {
      class task_canceled_exception;
      template<typename ExPolicy = parallel::parallel_execution_policy> 
        class task_block;
      template<typename ExPolicy, typename F> 
        unspecified define_task_block(ExPolicy &&, F &&);
      template<typename F> void define_task_block(F &&);
      template<typename ExPolicy, typename F> 
        unspecified define_task_block_restore_thread(ExPolicy &&, F &&);
      template<typename F> void define_task_block_restore_thread(F &&);
    }
  }
}

Class task_canceled_exception

hpx::parallel::v2::task_canceled_exception

Synopsis

// In header: <hpx/parallel/task_block.hpp>


class task_canceled_exception : public hpx::exception {
public:
  // construct/copy/destruct
  task_canceled_exception();
};

Description

The class task_canceled_exception defines the type of objects thrown by task_block::run or task_block::wait if they detect that an exception is pending within the current parallel region.

task_canceled_exception public construct/copy/destruct

  1. task_canceled_exception();

Class template task_block

hpx::parallel::v2::task_block

Synopsis

// In header: <hpx/parallel/task_block.hpp>

template<typename ExPolicy = parallel::parallel_execution_policy> 
class task_block {
public:
  // types
  typedef ExPolicy execution_policy;

  // public member functions
  execution_policy const & get_execution_policy() const;
  template<typename F> void run(F &&);
  template<typename Executor, typename F> void run(Executor &, F &&);
  void wait();
  ExPolicy & policy();
  ExPolicy const & policy() const;
};

Description

The class task_block defines an interface for forking and joining parallel tasks. The define_task_block and define_task_block_restore_thread function templates create an object of type task_block and pass a reference to that object to a user-provided callable object.

An object of class task_block cannot be constructed, destroyed, copied, or moved except by the implementation of the task region library. Taking the address of a task_block object via operator& or addressof is ill-formed. The result of obtaining its address by any other means is unspecified.

A task_block is active if it was created by the nearest enclosing task block, where "task block" refers to an invocation of define_task_block or define_task_block_restore_thread and "nearest enclosing" means the most recent invocation that has not yet completed. Code designated for execution in another thread by means other than the facilities in this section (e.g. using thread or async) is not enclosed in the task region, and a task_block passed to (or captured by) such code is not active within that code. Performing any operation on a task_block that is not active results in undefined behavior.

The task_block that is active before a specific call to the run member function is not active within the asynchronous function that invoked run. (The invoked function should not, therefore, capture the task_block from the surrounding block.)

Example: 

define_task_block([&](auto& tr) {
    tr.run([&] {
        tr.run([] { f(); });                // Error: tr is not active
        define_task_block([&](auto& tr) {   // Nested task block
            tr.run(f);                      // OK: inner tr is active
            // ...
        });
    });
    // ...
});

Template Parameters:

ExPolicy

The execution policy an instance of a task_block was created with. This defaults to parallel_execution_policy.

task_block public types

  1. typedef ExPolicy execution_policy;

    Refers to the type of the execution policy used to create the task_block.

task_block public member functions

  1. execution_policy const & get_execution_policy() const;

    Return the execution policy instance used to create this task_block

  2. template<typename F> void run(F && f);

    Causes the expression f() to be invoked asynchronously. The invocation of f is permitted to run on an unspecified thread in an unordered fashion relative to the sequence of operations following the call to run(f) (the continuation), or indeterminately sequenced within the same thread as the continuation.

    The call to run synchronizes with the invocation of f. The completion of f() synchronizes with the next invocation of wait on the same task_block or completion of the nearest enclosing task block (i.e., the define_task_block or define_task_block_restore_thread that created this task block).

    Requires: F shall be MoveConstructible. The expression, (void)f(), shall be well-formed.

    Precondition: this shall be the active task_block.

    Postconditions: A call to run may return on a different thread than that on which it was called.

    [Note]Note

    The call to run is sequenced before the continuation as if run returns on the same thread. The invocation of the user-supplied callable object f may be immediate or may be delayed until compute resources are available. run might or might not return before invocation of f completes.

    Throws:

    This function may throw task_canceled_exception, as described in Exception Handling.
  3. template<typename Executor, typename F> void run(Executor & exec, F && f);

    Causes the expression f() to be invoked asynchronously using the given executor. The invocation of f is permitted to run on an unspecified thread associated with the given executor and in an unordered fashion relative to the sequence of operations following the call to run(exec, f) (the continuation), or indeterminately sequenced within the same thread as the continuation.

    The call to run synchronizes with the invocation of f. The completion of f() synchronizes with the next invocation of wait on the same task_block or completion of the nearest enclosing task block (i.e., the define_task_block or define_task_block_restore_thread that created this task block).

    Requires: Executor shall be a type modeling the Executor concept. F shall be MoveConstructible. The expression, (void)f(), shall be well-formed.

    Precondition: this shall be the active task_block.

    Postconditions: A call to run may return on a different thread than that on which it was called.

    [Note]Note

    The call to run is sequenced before the continuation as if run returns on the same thread. The invocation of the user-supplied callable object f may be immediate or may be delayed until compute resources are available. run might or might not return before invocation of f completes.

    Throws:

    This function may throw task_canceled_exception, as described in Exception Handling.
  4. void wait();

    Blocks until the tasks spawned using this task_block have finished.

    Precondition: this shall be the active task_block.

    Postcondition: All tasks spawned by the nearest enclosing task region have finished. A call to wait may return on a different thread than that on which it was called.

    [Note]Note

    The call to wait is sequenced before the continuation as if wait returns on the same thread.

    Throws:

    This function may throw task_canceled_exception, as described in Exception Handling.

    Example: 

    define_task_block([&](auto& tr) {
        tr.run([&]{ process(a, w, x); }); // Process a[w] through a[x]
        if (y < x) tr.wait();             // Wait if overlap between [w, x) and [y, z)
        process(a, y, z);                 // Process a[y] through a[z]
    });

  5. ExPolicy & policy();

    Returns a reference to the execution policy used to construct this object.

    Precondition: this shall be the active task_block.

  6. ExPolicy const & policy() const;

    Returns a reference to the execution policy used to construct this object.

    Precondition: this shall be the active task_block.


Function template define_task_block

hpx::parallel::v2::define_task_block

Synopsis

// In header: <hpx/parallel/task_block.hpp>


template<typename ExPolicy, typename F> 
  unspecified define_task_block(ExPolicy && policy, F && f);

Description

Constructs a task_block, tr, using the given execution policy policy, and invokes the expression f(tr) on the user-provided object, f.

Postcondition: All tasks spawned from f have finished execution. A call to define_task_block may return on a different thread than that on which it was called.

[Note]Note

It is expected (but not mandated) that f will (directly or indirectly) call tr.run(callable_object).

Parameters:

f

The user defined function to invoke inside the task block. Given an lvalue tr of type task_block, the expression, (void)f(tr), shall be well-formed.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the task block may be parallelized.

F

The type of the user defined function to invoke inside the define_task_block (deduced). F shall be MoveConstructible.

Throws:

An exception_list, as specified in Exception Handling.
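
The fork/join behavior described above can be sketched as follows (not part of the original reference; left() and right() stand in for user-provided functions):

#include <hpx/parallel/task_block.hpp>

void left();
void right();

void traverse()
{
    using hpx::parallel::v2::define_task_block;

    // Fork two tasks using the parallel execution policy; the call to
    // define_task_block does not return before both tasks have finished.
    define_task_block(hpx::parallel::par, [&](auto& tr)
    {
        tr.run([]{ left(); });
        tr.run([]{ right(); });
    });
}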

Function template define_task_block

hpx::parallel::v2::define_task_block

Synopsis

// In header: <hpx/parallel/task_block.hpp>


template<typename F> void define_task_block(F && f);

Description

Constructs a task_block, tr, and invokes the expression f(tr) on the user-provided object, f. This version uses parallel_execution_policy for task scheduling.

Postcondition: All tasks spawned from f have finished execution. A call to define_task_block may return on a different thread than that on which it was called.

[Note]Note

It is expected (but not mandated) that f will (directly or indirectly) call tr.run(callable_object).

Parameters:

f

The user defined function to invoke inside the task block. Given an lvalue tr of type task_block, the expression, (void)f(tr), shall be well-formed.

Template Parameters:

F

The type of the user defined function to invoke inside the define_task_block (deduced). F shall be MoveConstructible.

Throws:

An exception_list, as specified in Exception Handling.

Function template define_task_block_restore_thread

hpx::parallel::v2::define_task_block_restore_thread

Synopsis

// In header: <hpx/parallel/task_block.hpp>


template<typename ExPolicy, typename F> 
  unspecified define_task_block_restore_thread(ExPolicy && policy, F && f);

Description

Constructs a task_block, tr, and invokes the expression f(tr) on the user-provided object, f.

Postcondition: All tasks spawned from f have finished execution. A call to define_task_block_restore_thread always returns on the same thread as that on which it was called.

[Note]Note

It is expected (but not mandated) that f will (directly or indirectly) call tr.run(callable_object).

Parameters:

f

The user defined function to invoke inside the define_task_block. Given an lvalue tr of type task_block, the expression, (void)f(tr), shall be well-formed.

policy

The execution policy to use for the scheduling of the iterations.

Template Parameters:

ExPolicy

The type of the execution policy to use (deduced). It describes the manner in which the execution of the task block may be parallelized.

F

The type of the user defined function to invoke inside the define_task_block (deduced). F shall be MoveConstructible.

Throws:

An exception_list, as specified in Exception Handling.

Function template define_task_block_restore_thread

hpx::parallel::v2::define_task_block_restore_thread

Synopsis

// In header: <hpx/parallel/task_block.hpp>


template<typename F> void define_task_block_restore_thread(F && f);

Description

Constructs a task_block, tr, and invokes the expression f(tr) on the user-provided object, f. This version uses parallel_execution_policy for task scheduling.

Postcondition: All tasks spawned from f have finished execution. A call to define_task_block_restore_thread always returns on the same thread as that on which it was called.

[Note]Note

It is expected (but not mandated) that f will (directly or indirectly) call tr.run(callable_object).

Parameters:

f

The user defined function to invoke inside the define_task_block. Given an lvalue tr of type task_block, the expression, (void)f(tr), shall be well-formed.

Template Parameters:

F

The type of the user defined function to invoke inside the define_task_block (deduced). F shall be MoveConstructible.

Throws:

An exception_list, as specified in Exception Handling.
namespace hpx {
  namespace performance_counters {
    counter_status 
    install_counter_type(std::string const &, 
                         hpx::util::function_nonser< boost::int64_t(bool)> const &, 
                         std::string const & = "", std::string const & = "", 
                         error_code & = throws);
    void install_counter_type(std::string const &, counter_type, 
                              error_code & = throws);
    counter_status 
    install_counter_type(std::string const &, counter_type, 
                         std::string const &, std::string const & = "", 
                         boost::uint32_t = HPX_PERFORMANCE_COUNTER_V1, 
                         error_code & = throws);
    counter_status 
    install_counter_type(std::string const &, counter_type, 
                         std::string const &, create_counter_func const &, 
                         discover_counters_func const &, 
                         boost::uint32_t = HPX_PERFORMANCE_COUNTER_V1, 
                         std::string const & = "", error_code & = throws);
  }
}

Function install_counter_type

hpx::performance_counters::install_counter_type — Install a new generic performance counter type in a way that ensures it will be uninstalled automatically during shutdown.

Synopsis

// In header: <hpx/performance_counters/manage_counter_type.hpp>


counter_status 
install_counter_type(std::string const & name, 
                     hpx::util::function_nonser< boost::int64_t(bool)> const & counter_value, 
                     std::string const & helptext = "", 
                     std::string const & uom = "", error_code & ec = throws);

Description

The function install_counter_type will register a new generic counter type based on the provided function. The counter type will be automatically unregistered during system shutdown. Any consumer querying any instance of this counter type will cause the provided function to be called and the returned value to be exposed as the counter value.

The counter type is registered such that there can be one counter instance per locality. The expected naming scheme for the counter instances is: '/objectname{locality#<*>/total}/countername' where '<*>' is a zero-based integer identifying the locality the counter is created on.

[Note]Note

As long as ec is not pre-initialized to hpx::throws this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

[Note]Note

The counter type registry is a locality based service. You will have to register each counter type on every locality where a corresponding performance counter will be created.

Parameters:

counter_value

[in] The function to call whenever the counter value is requested by a consumer.

ec

[in,out] This represents the error status on exit; if this is pre-initialized to hpx::throws, the function will throw on error instead.

helptext

[in, optional] A longer descriptive text shown to the user to explain the nature of the counters created from this type.

name

[in] The global virtual name of the counter type. This name is expected to have the format /objectname/countername.

uom

[in] The unit of measure for the new performance counter type.

Returns:

If successful, this function returns status_valid_data, otherwise it will either throw an exception or return an error_code from the enum counter_status (also, see note related to parameter ec).
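
As an illustrative sketch (the counter name '/app/queue_length' and the function queue_length are hypothetical), a generic counter could be registered like this:

#include <hpx/include/performance_counters.hpp>

// Counter function: called whenever a consumer queries the counter value.
// The 'reset' flag indicates whether the counter should be reset after
// retrieving its value.
boost::int64_t queue_length(bool reset)
{
    return 42;    // return the application-specific value here
}

void register_counter_types()
{
    hpx::performance_counters::install_counter_type(
        "/app/queue_length",    // counter type name
        &queue_length,          // function returning the counter value
        "returns the current length of the application work queue",
        "items");               // unit of measure
}

Since the counter type registry is a locality based service (see the note above), such a registration function has to be invoked on every locality where instances of this counter type will be created.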


Function install_counter_type

hpx::performance_counters::install_counter_type — Install a new performance counter type in a way that ensures it will be uninstalled automatically during shutdown.

Synopsis

// In header: <hpx/performance_counters/manage_counter_type.hpp>


void install_counter_type(std::string const & name, counter_type type, 
                          error_code & ec = throws);

Description

The function install_counter_type will register a new counter type based on the provided counter_type_info. The counter type will be automatically unregistered during system shutdown.

[Note]Note

The counter type registry is a locality based service. You will have to register each counter type on every locality where a corresponding performance counter will be created.

As long as ec is not pre-initialized to hpx::throws this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

Parameters:

ec

[in,out] This represents the error status on exit; if this is pre-initialized to hpx::throws, the function will throw on error instead.

name

[in] The global virtual name of the counter type. This name is expected to have the format /objectname/countername.

type

[in] The type of the counters of this counter_type.

Returns:

If successful, this function returns status_valid_data, otherwise it will either throw an exception or return an error_code from the enum counter_status (also, see note related to parameter ec).


Function install_counter_type

hpx::performance_counters::install_counter_type — Install a new performance counter type in a way that ensures it will be uninstalled automatically during shutdown.

Synopsis

// In header: <hpx/performance_counters/manage_counter_type.hpp>


counter_status 
install_counter_type(std::string const & name, counter_type type, 
                     std::string const & helptext, 
                     std::string const & uom = "", 
                     boost::uint32_t version = HPX_PERFORMANCE_COUNTER_V1, 
                     error_code & ec = throws);

Description

The function install_counter_type will register a new counter type based on the provided counter_type_info. The counter type will be automatically unregistered during system shutdown.

[Note]Note

The counter type registry is a locality based service. You will have to register each counter type on every locality where a corresponding performance counter will be created.

As long as ec is not pre-initialized to hpx::throws this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

Parameters:

ec

[in,out] This represents the error status on exit; if this is pre-initialized to hpx::throws, the function will throw on error instead.

helptext

[in] A longer descriptive text shown to the user to explain the nature of the counters created from this type.

name

[in] The global virtual name of the counter type. This name is expected to have the format /objectname/countername.

type

[in] The type of the counters of this counter_type.

uom

[in] The unit of measure for the new performance counter type.

version

[in] The version of the counter type. This is currently expected to be set to HPX_PERFORMANCE_COUNTER_V1.

Returns:

If successful, this function returns status_valid_data, otherwise it will either throw an exception or return an error_code from the enum counter_status (also, see note related to parameter ec).


Function install_counter_type

hpx::performance_counters::install_counter_type — Install a new generic performance counter type in a way that ensures it will be uninstalled automatically during shutdown.

Synopsis

// In header: <hpx/performance_counters/manage_counter_type.hpp>


counter_status 
install_counter_type(std::string const & name, counter_type type, 
                     std::string const & helptext, 
                     create_counter_func const & create_counter, 
                     discover_counters_func const & discover_counters, 
                     boost::uint32_t version = HPX_PERFORMANCE_COUNTER_V1, 
                     std::string const & uom = "", error_code & ec = throws);

Description

The function install_counter_type will register a new generic counter type based on the provided counter_type_info. The counter type will be automatically unregistered during system shutdown.

[Note]Note

As long as ec is not pre-initialized to hpx::throws this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

[Note]Note

The counter type registry is a locality based service. You will have to register each counter type on every locality where a corresponding performance counter will be created.

Parameters:

create_counter

[in] The function which will be called to create a new instance of this counter type.

discover_counters

[in] The function which will be called to discover counter instances which can be created.

ec

[in,out] This represents the error status on exit; if this is pre-initialized to hpx::throws, the function will throw on error instead.

helptext

[in] A longer descriptive text shown to the user to explain the nature of the counters created from this type.

name

[in] The global virtual name of the counter type. This name is expected to have the format /objectname/countername.

type

[in] The type of the counters of this counter_type.

uom

[in] The unit of measure of the counter type (default: "")

version

[in] The version of the counter type. This is currently expected to be set to HPX_PERFORMANCE_COUNTER_V1.

Returns:

If successful, this function returns status_valid_data, otherwise it will either throw an exception or return an error_code from the enum counter_status (also, see note related to parameter ec).


HPX_REGISTER_ACTION_DECLARATION(...)
HPX_REGISTER_ACTION(...)
HPX_REGISTER_ACTION_ID(action, actionname, actionid)

Macro HPX_REGISTER_ACTION_DECLARATION

HPX_REGISTER_ACTION_DECLARATION — Declare the necessary component action boilerplate code.

Synopsis

// In header: <hpx/runtime/actions/basic_action.hpp>

HPX_REGISTER_ACTION_DECLARATION(...)

Description

The macro HPX_REGISTER_ACTION_DECLARATION can be used to declare all the boilerplate code which is required for proper functioning of component actions in the context of HPX.

The parameter action is the type of the action to declare the boilerplate for.

This macro can be invoked with an optional second parameter. This parameter specifies a unique name of the action to be used for serialization purposes. The second parameter has to be specified if the first parameter is not usable as a plain (non-qualified) C++ identifier, i.e. the first parameter contains special characters which cannot be part of a C++ identifier, such as '<', '>', or ':'.
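
Example (a reconstruction, reusing the app::server component from the HPX_DEFINE_COMPONENT_ACTION example shown later in this reference): 

namespace app
{
    // Define a simple component exposing one action 'print_greeting'
    class HPX_COMPONENT_EXPORT server
      : public hpx::components::simple_component_base<server>
    {
        void print_greeting() const
        {
            hpx::cout << "Hey, how are you?\n" << hpx::flush;
        }

        // Component actions need to be declared, this also defines the
        // type 'print_greeting_action' representing the action.
        HPX_DEFINE_COMPONENT_ACTION(server, print_greeting,
            print_greeting_action);
    };
}

// Declare the boilerplate code required for the component action. This is
// usually placed into the header file defining the component.
HPX_REGISTER_ACTION_DECLARATION(app::server::print_greeting_action);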

[Note]Note

This macro has to be used once for each of the component actions defined using one of the HPX_DEFINE_COMPONENT_ACTION macros. It has to be visible in all translation units using the action, thus it is recommended to place it into the header file defining the component.



Macro HPX_REGISTER_ACTION

HPX_REGISTER_ACTION — Define the necessary component action boilerplate code.

Synopsis

// In header: <hpx/runtime/actions/basic_action.hpp>

HPX_REGISTER_ACTION(...)

Description

The macro HPX_REGISTER_ACTION can be used to define all the boilerplate code which is required for proper functioning of component actions in the context of HPX.

The parameter action is the type of the action to define the boilerplate for.

This macro can be invoked with an optional second parameter. This parameter specifies a unique name of the action to be used for serialization purposes. The second parameter has to be specified if the first parameter is not usable as a plain (non-qualified) C++ identifier, i.e. the first parameter contains special characters which cannot be part of a C++ identifier, such as '<', '>', or ':'.

[Note]Note

This macro has to be used once for each of the component actions defined using one of the HPX_DEFINE_COMPONENT_ACTION or HPX_DEFINE_PLAIN_ACTION macros. It has to occur exactly once for each of the actions, thus it is recommended to place it into the source file defining the component.

Only one of the forms of this macro HPX_REGISTER_ACTION or HPX_REGISTER_ACTION_ID should be used for a particular action, never both.
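
A typical split between header and source file can be sketched as follows (illustrative; app::server::print_greeting_action is the hypothetical action type used in the examples of this reference):

// server.hpp - visible in all translation units using the action:
HPX_REGISTER_ACTION_DECLARATION(app::server::print_greeting_action);

// server.cpp - occurs exactly once in the whole program:
HPX_REGISTER_ACTION(app::server::print_greeting_action);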


Macro HPX_REGISTER_ACTION_ID

HPX_REGISTER_ACTION_ID — Define the necessary component action boilerplate code and assign a predefined unique id to the action.

Synopsis

// In header: <hpx/runtime/actions/basic_action.hpp>

HPX_REGISTER_ACTION_ID(action, actionname, actionid)

Description

The macro HPX_REGISTER_ACTION_ID can be used to define all the boilerplate code which is required for proper functioning of component actions in the context of HPX.

The parameter action is the type of the action to define the boilerplate for.

The parameter actionname specifies a unique name of the action to be used for serialization purposes. The second parameter has to be usable as a plain (non-qualified) C++ identifier; it should not contain special characters which cannot be part of a C++ identifier, such as '<', '>', or ':'.

The parameter actionid specifies a unique integer value which will be used to represent the action during serialization.

[Note]Note

This macro has to be used once for each of the component actions defined using one of the HPX_DEFINE_COMPONENT_ACTION or HPX_DEFINE_PLAIN_ACTION macros. It has to occur exactly once for each of the actions, thus it is recommended to place it into the source file defining the component.

Only one of the forms of this macro HPX_REGISTER_ACTION or HPX_REGISTER_ACTION_ID should be used for a particular action, never both.
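
For illustration (the action type, serialization name, and id value are hypothetical):

// Assign the predefined, program-wide unique id 1000 to the action. Like
// HPX_REGISTER_ACTION, this has to occur exactly once, in the source file
// defining the component.
HPX_REGISTER_ACTION_ID(app::server::print_greeting_action,
    server_print_greeting_action, 1000);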


HPX_DEFINE_COMPONENT_ACTION(...)

Macro HPX_DEFINE_COMPONENT_ACTION

HPX_DEFINE_COMPONENT_ACTION — Registers a member function of a component as an action type with HPX.

Synopsis

// In header: <hpx/runtime/actions/component_action.hpp>

HPX_DEFINE_COMPONENT_ACTION(...)

Description

The macro HPX_DEFINE_COMPONENT_ACTION can be used to register a member function of a component as an action type named action_type.

The parameter component is the type of the component exposing the member function func which should be associated with the newly defined action type. The parameter action_type is the name of the action type to register with HPX.

Example: 

namespace app
{
    // Define a simple component exposing one action 'print_greeting'
    class HPX_COMPONENT_EXPORT server
      : public hpx::components::simple_component_base<server>
    {
        void print_greeting() const
        {
            hpx::cout << "Hey, how are you?\n" << hpx::flush;
        }

        // Component actions need to be declared, this also defines the
        // type 'print_greeting_action' representing the action.
        HPX_DEFINE_COMPONENT_ACTION(server, print_greeting,
            print_greeting_action);
    };
}

The first argument must provide the type name of the component the action is defined for.

The second argument must provide the member function name the action should wrap.

[Note]Note

The macro HPX_DEFINE_COMPONENT_ACTION can be used with 2 or 3 arguments. The third argument is optional.

The default value for the third argument (the typename of the defined action) is derived from the name of the function (as passed as the second argument) by appending '_action'. The third argument can be omitted only if the second argument with an appended suffix '_action' resolves to a valid, unqualified C++ type name.


HPX_DEFINE_PLAIN_ACTION(...)
HPX_PLAIN_ACTION(...)
HPX_PLAIN_ACTION_ID(func, name, id)

Macro HPX_DEFINE_PLAIN_ACTION

HPX_DEFINE_PLAIN_ACTION — Defines a plain action type.

Synopsis

// In header: <hpx/runtime/actions/plain_action.hpp>

HPX_DEFINE_PLAIN_ACTION(...)

Description

Example: 

namespace app
{
    void some_global_function(double d)
    {
        cout << d;
    }

    // This will define the action type 'app::some_global_action' which
    // represents the function 'app::some_global_function'.
    HPX_DEFINE_PLAIN_ACTION(some_global_function, some_global_action);
}
[Note]Note

Usually this macro will not be used in user code unless the intent is to avoid defining the action type in the global namespace. Normally, the use of the macro HPX_PLAIN_ACTION is recommended.

The macro HPX_DEFINE_PLAIN_ACTION can be used with 1 or 2 arguments. The second argument is optional. The default value for the second argument (the typename of the defined action) is derived from the name of the function (as passed as the first argument) by appending '_action'. The second argument can be omitted only if the first argument with an appended suffix '_action' resolves to a valid, unqualified C++ type name.


Macro HPX_PLAIN_ACTION

HPX_PLAIN_ACTION — Defines a plain action type based on the given function func and registers it with HPX.

Synopsis

// In header: <hpx/runtime/actions/plain_action.hpp>

HPX_PLAIN_ACTION(...)

Description

Defines a plain action type based on the given function func and registers it with HPX.

The macro HPX_PLAIN_ACTION can be used to define a plain action (e.g. an action encapsulating a global or free function) based on the given function func. It defines the action type name representing the given function. This macro additionally registers the newly defined action type with HPX.

The parameter func is a global or free (non-member) function which should be encapsulated into a plain action. The parameter name is the name of the action type defined by this macro.

Example: 

namespace app
{
    void some_global_function(double d)
    {
        cout << d;
    }
}

// This will define the action type 'some_global_action' which represents
// the function 'app::some_global_function'.
HPX_PLAIN_ACTION(app::some_global_function, some_global_action);
[Note]Note

The macro HPX_PLAIN_ACTION has to be used at global namespace even if the wrapped function is located in some other namespace. The newly defined action type is placed into the global namespace as well.

The macro HPX_PLAIN_ACTION can be used with 1, 2, or 3 arguments. The second and third arguments are optional. The default value for the second argument (the typename of the defined action) is derived from the name of the function (as passed as the first argument) by appending '_action'. The second argument can be omitted only if the first argument with an appended suffix '_action' resolves to a valid, unqualified C++ type name. The default value for the third argument is hpx::components::factory_check.

Only one of the forms of this macro HPX_PLAIN_ACTION or HPX_PLAIN_ACTION_ID should be used for a particular action, never both.


Macro HPX_PLAIN_ACTION_ID

HPX_PLAIN_ACTION_ID — Defines a plain action type based on the given function func and registers it with HPX.

Synopsis

// In header: <hpx/runtime/actions/plain_action.hpp>

HPX_PLAIN_ACTION_ID(func, name, id)

Description

The macro HPX_PLAIN_ACTION_ID can be used to define a plain action (e.g. an action encapsulating a global or free function) based on the given function func. It defines the action type name representing the given function. This macro additionally registers the newly defined action type with HPX.

The parameter id specifies a unique integer value which will be used to represent the action during serialization.

The parameter func is a global or free (non-member) function which should be encapsulated into a plain action. The parameter name is the name of the action type defined by this macro.

The second parameter has to be usable as a plain (non-qualified) C++ identifier; it must not contain special characters which cannot be part of a C++ identifier, such as '<', '>', or ':'.

Example: 

namespace app
{
    void some_global_function(double d)
    {
        cout << d;
    }
}

// This will define the action type 'some_global_action' which represents
// the function 'app::some_global_function'.
HPX_PLAIN_ACTION_ID(app::some_global_function, some_global_action,
  some_unique_id);
[Note]Note

The macro HPX_PLAIN_ACTION_ID has to be used at global namespace even if the wrapped function is located in some other namespace. The newly defined action type is placed into the global namespace as well.

Only one of the forms of this macro HPX_PLAIN_ACTION or HPX_PLAIN_ACTION_ID should be used for a particular action, never both.

namespace hpx {
  namespace agas {

    enum service_mode { service_mode_invalid = -1,
                        service_mode_bootstrap = 0, service_mode_hosted = 1 };
  }
}
namespace hpx {
  namespace applier {
    applier & get_applier();
    applier * get_applier_ptr();
  }
}

Function get_applier

hpx::applier::get_applier

Synopsis

// In header: <hpx/runtime/applier_fwd.hpp>


applier & get_applier();

Description

The function get_applier returns a reference to the (thread specific) applier instance.

namespace hpx {
  std::vector< hpx::future< hpx::id_type > > 
  find_all_from_basename(std::string, std::size_t);
  std::vector< hpx::future< hpx::id_type > > 
  find_from_basename(std::string, std::vector< std::size_t > const &);
  hpx::future< hpx::id_type > 
  find_from_basename(std::string, std::size_t = ~0U);
  hpx::future< bool > 
  register_with_basename(std::string, hpx::id_type, std::size_t = ~0U);
  template<typename Client, typename Stub> 
    hpx::future< bool > 
    register_with_basename(std::string, hpx::future< hpx::id_type >, 
                           std::size_t = ~0U);
  template<typename Client, typename Stub> 
    hpx::future< bool > 
    register_with_basename(std::string, 
                           components::client_base< Client, Stub > &, 
                           std::size_t = ~0U);
  hpx::future< hpx::id_type > 
  unregister_with_basename(std::string, std::size_t = ~0U);
}

Function find_all_from_basename

hpx::find_all_from_basename

Synopsis

// In header: <hpx/runtime/basename_registration.hpp>


std::vector< hpx::future< hpx::id_type > > 
find_all_from_basename(std::string base_name, std::size_t num_ids);

Description

Return all registered ids from all localities from the given base name.

This function locates all ids which were registered with the given base name. It returns a list of futures representing those ids.

[Note]Note

The futures will become ready even if the event (for instance, binding the name to an id) has already happened in the past. This is important in order to reliably retrieve ids from a name, even if the name was already registered.

Parameters:

base_name

[in] The base name for which to retrieve the registered ids.

num_ids

[in] The number of registered ids to expect.

Returns:

A list of futures representing the ids which were registered using the given base name.



Function find_from_basename

hpx::find_from_basename

Synopsis

// In header: <hpx/runtime/basename_registration.hpp>


std::vector< hpx::future< hpx::id_type > > 
find_from_basename(std::string base_name, 
                   std::vector< std::size_t > const & ids);

Description

Return registered ids from the given base name and sequence numbers.

This function locates the ids which were registered with the given base name and the given sequence numbers. It returns a list of futures representing those ids.

[Note]Note

The futures will become ready even if the event (for instance, binding the name to an id) has already happened in the past. This is important in order to reliably retrieve ids from a name, even if the name was already registered.

Parameters:

base_name

[in] The base name for which to retrieve the registered ids.

ids

[in] The sequence numbers of the registered ids.

Returns:

A list of futures representing the ids which were registered using the given base name and sequence numbers.



Function find_from_basename

hpx::find_from_basename — Return registered id from the given base name and sequence number.

Synopsis

// In header: <hpx/runtime/basename_registration.hpp>


hpx::future< hpx::id_type > 
find_from_basename(std::string base_name, std::size_t sequence_nr = ~0U);

Description

This function locates the id which was registered with the given base name and the given sequence number. It returns a future representing that id.

[Note]Note

The future will become ready even if the event (for instance, binding the name to an id) has already happened in the past. This is important in order to reliably retrieve ids from a name, even if the name was already registered.

Parameters:

base_name

[in] The base name for which to retrieve the registered ids.

sequence_nr

[in] The sequence number of the registered id.

Returns:

A future representing the id which was registered using the given base name and sequence number.


Function register_with_basename

hpx::register_with_basename — Register the given id using the given base name.

Synopsis

// In header: <hpx/runtime/basename_registration.hpp>


hpx::future< bool > 
register_with_basename(std::string base_name, hpx::id_type id, 
                       std::size_t sequence_nr = ~0U);

Description

The function registers the given id using the provided base name.

[Note]Note

The operation will fail if the given sequence number is not unique.

Parameters:

base_name

[in] The base name to use for the registration.

id

[in] The id to register using the given base name.

sequence_nr

[in, optional] The sequential number to use for the registration of the id. This number has to be unique system wide for each registration using the same base name. The default is the current locality identifier. Also, the sequence numbers have to be consecutive starting from zero.

Returns:

A future representing the result of the registration operation itself.


Function template register_with_basename

hpx::register_with_basename

Synopsis

// In header: <hpx/runtime/basename_registration.hpp>


template<typename Client, typename Stub> 
  hpx::future< bool > 
  register_with_basename(std::string base_name, hpx::future< hpx::id_type > f, 
                         std::size_t sequence_nr = ~0U);

Description

Register the id wrapped in the given future using the given base name.

The function registers the object the given future refers to using the provided base name.

[Note]Note

The operation will fail if the given sequence number is not unique.

Parameters:

base_name

[in] The base name to use for the registration.

f

[in] The future which should be registered using the given base name.

sequence_nr

[in, optional] The sequential number to use for the registration of the id. This number has to be unique system wide for each registration using the same base name. The default is the current locality identifier. Also, the sequence numbers have to be consecutive starting from zero.

Returns:

A future representing the result of the registration operation itself.


Function template register_with_basename

hpx::register_with_basename

Synopsis

// In header: <hpx/runtime/basename_registration.hpp>


template<typename Client, typename Stub> 
  hpx::future< bool > 
  register_with_basename(std::string base_name, 
                         components::client_base< Client, Stub > & client, 
                         std::size_t sequence_nr = ~0U);

Description

Register the id wrapped in the given client using the given base name.

The function registers the object the given client refers to using the provided base name.

[Note]Note

The operation will fail if the given sequence number is not unique.

Parameters:

base_name

[in] The base name to use for the registration.

client

[in] The client which should be registered using the given base name.

sequence_nr

[in, optional] The sequential number to use for the registration of the id. This number has to be unique system wide for each registration using the same base name. The default is the current locality identifier. Also, the sequence numbers have to be consecutive starting from zero.

Template Parameters:

Client

The client type to register

Returns:

A future representing the result of the registration operation itself.


Function unregister_with_basename

hpx::unregister_with_basename — Unregister the given id using the given base name.

Synopsis

// In header: <hpx/runtime/basename_registration.hpp>


hpx::future< hpx::id_type > 
unregister_with_basename(std::string base_name, std::size_t sequence_nr = ~0U);

Description

The function unregisters the id which was registered using the provided base name.

Parameters:

base_name

[in] The base name for which to unregister the id.

sequence_nr

[in, optional] The sequential number to use for the un-registration. This number has to be the same as has been used with register_with_basename before.

Returns:

A future representing the result of the un-registration operation itself.


namespace hpx {
  namespace components {
    struct binpacking_distribution_policy;

    static char const *const default_binpacking_counter_name;
    static binpacking_distribution_policy const binpacked;
  }
}

Struct binpacking_distribution_policy

hpx::components::binpacking_distribution_policy

Synopsis

// In header: <hpx/runtime/components/binpacking_distribution_policy.hpp>


struct binpacking_distribution_policy {
  // construct/copy/destruct
  binpacking_distribution_policy();

  // public member functions
  binpacking_distribution_policy 
  operator()(std::vector< id_type > const &, 
             char const * = default_binpacking_counter_name) const;
  binpacking_distribution_policy 
  operator()(std::vector< id_type > &&, 
             char const * = default_binpacking_counter_name) const;
  binpacking_distribution_policy 
  operator()(id_type const &, char const * = default_binpacking_counter_name) const;
  template<typename Component, typename... Ts> 
    hpx::future< hpx::id_type > create(Ts &&...) const;
  template<typename Component, typename... Ts> 
    hpx::future< std::vector< bulk_locality_result > > 
    bulk_create(std::size_t, Ts &&...) const;
  std::string const & get_counter_name() const;
  std::size_t get_num_localities() const;
};

Description

This class specifies the parameters for a binpacking distribution policy to use for creating a given number of items on a given set of localities. The binpacking policy distributes the new objects such that the localities equalize the overall number of objects of this type, based on a given criterion (by default, the overall number of existing objects of this type).

binpacking_distribution_policy public construct/copy/destruct

  1. binpacking_distribution_policy();

    Default-construct a new instance of a binpacking_distribution_policy. This policy will represent one locality (the local locality).

binpacking_distribution_policy public member functions

  1. binpacking_distribution_policy 
    operator()(std::vector< id_type > const & locs, 
               char const * counter_name = default_binpacking_counter_name) const;

    Create a new binpacking_distribution_policy representing the given set of localities.

    Parameters:

    counter_name

    [in] The name of the performance counter which should be used as the distribution criteria (by default the overall number of existing instances of the given component type will be used).

    locs

    [in] The list of localities the new instance should represent

  2. binpacking_distribution_policy 
    operator()(std::vector< id_type > && locs, 
               char const * counter_name = default_binpacking_counter_name) const;
  3. binpacking_distribution_policy 
    operator()(id_type const & loc, 
               char const * counter_name = default_binpacking_counter_name) const;

    Create a new binpacking_distribution_policy representing the given locality

    Parameters:

    counter_name

    [in] The name of the performance counter which should be used as the distribution criteria (by default the overall number of existing instances of the given component type will be used).

    loc

    [in] The locality the new instance should represent

  4. template<typename Component, typename... Ts> 
      hpx::future< hpx::id_type > create(Ts &&... vs) const;

    Create one object on one of the localities associated by this policy instance

    Parameters:

    vs

    [in] The arguments which will be forwarded to the constructor of the new object.

    Returns:

    A future holding the global address which represents the newly created object

  5. template<typename Component, typename... Ts> 
      hpx::future< std::vector< bulk_locality_result > > 
      bulk_create(std::size_t count, Ts &&... vs) const;

    Create multiple objects on the localities associated by this policy instance

    Parameters:

    count

    [in] The number of objects to create

    vs

    [in] The arguments which will be forwarded to the constructors of the new objects.

    Returns:

    A future holding the list of global addresses which represent the newly created objects

  6. std::string const & get_counter_name() const;

    Returns the name of the performance counter associated with this policy instance.

  7. std::size_t get_num_localities() const;

    Returns the number of associated localities for this distribution policy

    [Note]Note

    This function is part of the creation policy implemented by this class


Global default_binpacking_counter_name

hpx::components::default_binpacking_counter_name

Synopsis

// In header: <hpx/runtime/components/binpacking_distribution_policy.hpp>

static char const *const default_binpacking_counter_name;

Global binpacked

hpx::components::binpacked

Synopsis

Description

A predefined instance of the binpacking distribution_policy. It will represent the local locality and will place all items to create here.

namespace hpx {
  namespace components {
    struct colocating_distribution_policy;

    static colocating_distribution_policy const colocated;
  }
}

Struct colocating_distribution_policy

hpx::components::colocating_distribution_policy

Synopsis

// In header: <hpx/runtime/components/colocating_distribution_policy.hpp>


struct colocating_distribution_policy {
  // construct/copy/destruct
  colocating_distribution_policy();

  // public member functions
  colocating_distribution_policy operator()(id_type const &) const;
  template<typename Client, typename Stub> 
    colocating_distribution_policy 
    operator()(client_base< Client, Stub > const &) const;
  template<typename Component, typename... Ts> 
    hpx::future< hpx::id_type > create(Ts &&...) const;
  template<typename Component, typename... Ts> 
    hpx::future< std::vector< bulk_locality_result > > 
    bulk_create(std::size_t, Ts &&...) const;
  template<typename Action, typename... Ts> 
    hpx::future< typename traits::promise_local_result< typename hpx::actions::extract_action< Action >::remote_result_type >::type > 
    async(launch, Ts &&...) const;
  template<typename Action, typename Callback, typename... Ts> 
    hpx::future< typename traits::promise_local_result< typename hpx::actions::extract_action< Action >::remote_result_type >::type > 
    async_cb(launch, Callback &&, Ts &&...) const;
  template<typename Action, typename Continuation, typename... Ts> 
    bool apply(Continuation &&, threads::thread_priority, Ts &&...) const;
  template<typename Action, typename... Ts> 
    bool apply(threads::thread_priority, Ts &&...) const;
  template<typename Action, typename Continuation, typename Callback, 
           typename... Ts> 
    bool apply_cb(Continuation &&, threads::thread_priority, Callback &&, 
                  Ts &&...) const;
  template<typename Action, typename Callback, typename... Ts> 
    bool apply_cb(threads::thread_priority, Callback &&, Ts &&...) const;
  std::size_t get_num_localities() const;
  hpx::id_type get_next_target() const;
};

Description

This class specifies the parameters for a distribution policy to use for creating a given number of items on the locality where a given object is currently placed.

colocating_distribution_policy public construct/copy/destruct

  1. colocating_distribution_policy();

    Default-construct a new instance of a colocating_distribution_policy. This policy will represent the local locality.

colocating_distribution_policy public member functions

  1. colocating_distribution_policy operator()(id_type const & id) const;

    Create a new colocating_distribution_policy representing the locality where the given object is currently located

    Parameters:

    id

    [in] The global address of the object with which the new instances should be colocated

  2. template<typename Client, typename Stub> 
      colocating_distribution_policy 
      operator()(client_base< Client, Stub > const & client) const;

    Create a new colocating_distribution_policy representing the locality where the given object is currently located

    Parameters:

    client

    [in] The client side representation of the object with which the new instances should be colocated

  3. template<typename Component, typename... Ts> 
      hpx::future< hpx::id_type > create(Ts &&... vs) const;

    Create one object on the locality of the object this distribution policy instance is associated with

    [Note]Note

    This function is part of the placement policy implemented by this class

    Parameters:

    vs

    [in] The arguments which will be forwarded to the constructor of the new object.

    Returns:

    A future holding the global address which represents the newly created object

  4. template<typename Component, typename... Ts> 
      hpx::future< std::vector< bulk_locality_result > > 
      bulk_create(std::size_t count, Ts &&... vs) const;

    Create multiple objects colocated with the object represented by this policy instance

    [Note]Note

    This function is part of the placement policy implemented by this class

    Parameters:

    count

    [in] The number of objects to create

    vs

    [in] The arguments which will be forwarded to the constructors of the new objects.

    Returns:

    A future holding the list of global addresses which represent the newly created objects

  5. template<typename Action, typename... Ts> 
      hpx::future< typename traits::promise_local_result< typename hpx::actions::extract_action< Action >::remote_result_type >::type > 
      async(launch policy, Ts &&... vs) const;
    [Note]Note

    This function is part of the invocation policy implemented by this class

  6. template<typename Action, typename Callback, typename... Ts> 
      hpx::future< typename traits::promise_local_result< typename hpx::actions::extract_action< Action >::remote_result_type >::type > 
      async_cb(launch policy, Callback && cb, Ts &&... vs) const;
    [Note]Note

    This function is part of the invocation policy implemented by this class

  7. template<typename Action, typename Continuation, typename... Ts> 
      bool apply(Continuation && c, threads::thread_priority priority, 
                 Ts &&... vs) const;
    [Note]Note

    This function is part of the invocation policy implemented by this class

  8. template<typename Action, typename... Ts> 
      bool apply(threads::thread_priority priority, Ts &&... vs) const;
  9. template<typename Action, typename Continuation, typename Callback, 
             typename... Ts> 
      bool apply_cb(Continuation && c, threads::thread_priority priority, 
                    Callback && cb, Ts &&... vs) const;
    [Note]Note

    This function is part of the invocation policy implemented by this class

  10. template<typename Action, typename Callback, typename... Ts> 
      bool apply_cb(threads::thread_priority priority, Callback && cb, 
                    Ts &&... vs) const;
  11. std::size_t get_num_localities() const;

    Returns the number of associated localities for this distribution policy

    [Note]Note

    This function is part of the creation policy implemented by this class

  12. hpx::id_type get_next_target() const;

    Returns the locality which is anticipated to be used for the next async operation


Global colocated

hpx::components::colocated

Synopsis

Description

A predefined instance of the co-locating distribution_policy. It will represent the local locality and will place all items to create here.


Macro HPX_REGISTER_COMPONENT

HPX_REGISTER_COMPONENT — Define a component factory for a component type.

Synopsis

// In header: <hpx/runtime/components/component_factory.hpp>

HPX_REGISTER_COMPONENT(type, name, mode)

Description

This macro is used to create and register a minimal component factory for a component type, which allows it to be remotely created using the hpx::new_<> function.

This macro can be invoked with one, two, or three arguments.

Parameters:

mode

The mode parameter has to be one of the defined enumeration values of the enumeration hpx::components::factory_state_enum. The default for this parameter is hpx::components::factory_enabled.

name

The name parameter specifies the name to use to register the factory. This should uniquely (system-wide) identify the component type. The name parameter must conform to the C++ identifier rules (without any namespace). If this parameter is not given, the first parameter is used.

type

The type parameter is a (fully decorated) type of the component type for which a factory should be defined.

namespace hpx {
  namespace components {
    template<typename Component> 
      future< naming::id_type > copy(naming::id_type const &);
    template<typename Component> 
      future< naming::id_type > 
      copy(naming::id_type const &, naming::id_type const &);
    template<typename Derived, typename Stub> 
      Derived copy(client_base< Derived, Stub > const &, 
                   naming::id_type const & = naming::invalid_id);
  }
}

Function template copy

hpx::components::copy — Copy given component to the specified target locality.

Synopsis

// In header: <hpx/runtime/components/copy_component.hpp>


template<typename Component> 
  future< naming::id_type > copy(naming::id_type const & to_copy);

Description

The function copy<Component> will create a copy of the component referenced by to_copy. It returns a future referring to the newly created component instance.

[Note]Note

The new component instance is created on the locality of the component instance which is to be copied.

Parameters:

to_copy

[in] The global id of the component to copy

Returns:

A future representing the global id of the newly (copied) component instance.


Function template copy

hpx::components::copy — Copy given component to the specified target locality.

Synopsis

// In header: <hpx/runtime/components/copy_component.hpp>


template<typename Component> 
  future< naming::id_type > 
  copy(naming::id_type const & to_copy, 
       naming::id_type const & target_locality);

Description

The function copy<Component> will create a copy of the component referenced by to_copy on the locality specified with target_locality. It returns a future referring to the newly created component instance.

Parameters:

target_locality

[in] The locality where the copy should be created.

to_copy

[in] The global id of the component to copy

Returns:

A future representing the global id of the newly (copied) component instance.


Function template copy

hpx::components::copy — Copy given component to the specified target locality.

Synopsis

// In header: <hpx/runtime/components/copy_component.hpp>


template<typename Derived, typename Stub> 
  Derived copy(client_base< Derived, Stub > const & to_copy, 
               naming::id_type const & target_locality = naming::invalid_id);

Description

The function copy will create a copy of the component referenced by the client side object to_copy on the locality specified with target_locality. It returns a new client side object referring to the newly created component instance.

[Note]Note

If the second argument is omitted (or is invalid_id) the new component instance is created on the locality of the component instance which is to be copied.

Parameters:

target_locality

[in, optional] The locality where the copy should be created (default is same locality as source).

to_copy

[in] The client side object representing the component to copy

Returns:

A future representing the global id of the newly (copied) component instance.

namespace hpx {
  namespace components {
    struct default_distribution_policy;

    static default_distribution_policy const default_layout;
  }
}

Struct default_distribution_policy

hpx::components::default_distribution_policy

Synopsis

// In header: <hpx/runtime/components/default_distribution_policy.hpp>


struct default_distribution_policy {
  // construct/copy/destruct
  default_distribution_policy();

  // public member functions
  default_distribution_policy operator()(std::vector< id_type > const &) const;
  default_distribution_policy operator()(std::vector< id_type > &&) const;
  default_distribution_policy operator()(id_type const &) const;
  template<typename Component, typename... Ts> 
    hpx::future< hpx::id_type > create(Ts &&...) const;
  template<typename Component, typename... Ts> 
    hpx::future< std::vector< bulk_locality_result > > 
    bulk_create(std::size_t, Ts &&...) const;
  template<typename Action, typename... Ts> 
    hpx::future< typename traits::promise_local_result< typename hpx::actions::extract_action< Action >::remote_result_type >::type > 
    async(launch, Ts &&...) const;
  template<typename Action, typename Callback, typename... Ts> 
    hpx::future< typename traits::promise_local_result< typename hpx::actions::extract_action< Action >::remote_result_type >::type > 
    async_cb(launch, Callback &&, Ts &&...) const;
  template<typename Action, typename Continuation, typename... Ts> 
    bool apply(Continuation &&, threads::thread_priority, Ts &&...) const;
  template<typename Action, typename... Ts> 
    bool apply(threads::thread_priority, Ts &&...) const;
  template<typename Action, typename Continuation, typename Callback, 
           typename... Ts> 
    bool apply_cb(Continuation &&, threads::thread_priority, Callback &&, 
                  Ts &&...) const;
  template<typename Action, typename Callback, typename... Ts> 
    bool apply_cb(threads::thread_priority, Callback &&, Ts &&...) const;
  std::size_t get_num_localities() const;
  hpx::id_type get_next_target() const;
};

Description

This class specifies the parameters for a simple distribution policy to use for creating (and evenly distributing) a given number of items on a given set of localities.

default_distribution_policy public construct/copy/destruct

  1. default_distribution_policy();

    Default-construct a new instance of a default_distribution_policy. This policy will represent one locality (the local locality).

default_distribution_policy public member functions

  1. default_distribution_policy 
    operator()(std::vector< id_type > const & locs) const;

    Create a new default_distribution_policy representing the given set of localities.

    Parameters:

    locs

    [in] The list of localities the new instance should represent

  2. default_distribution_policy operator()(std::vector< id_type > && locs) const;
  3. default_distribution_policy operator()(id_type const & loc) const;

    Create a new default_distribution_policy representing the given locality

    Parameters:

    loc

    [in] The locality the new instance should represent

  4. template<typename Component, typename... Ts> 
      hpx::future< hpx::id_type > create(Ts &&... vs) const;

    Create one object on one of the localities associated by this policy instance

    [Note]Note

    This function is part of the placement policy implemented by this class

    Parameters:

    vs

    [in] The arguments which will be forwarded to the constructor of the new object.

    Returns:

    A future holding the global address which represents the newly created object

  5. template<typename Component, typename... Ts> 
      hpx::future< std::vector< bulk_locality_result > > 
      bulk_create(std::size_t count, Ts &&... vs) const;

    Create multiple objects on the localities associated by this policy instance

    [Note]Note

    This function is part of the placement policy implemented by this class

    Parameters:

    count

    [in] The number of objects to create

    vs

    [in] The arguments which will be forwarded to the constructors of the new objects.

    Returns:

    A future holding the list of global addresses which represent the newly created objects

  6. template<typename Action, typename... Ts> 
      hpx::future< typename traits::promise_local_result< typename hpx::actions::extract_action< Action >::remote_result_type >::type > 
      async(launch policy, Ts &&... vs) const;
    [Note]Note

    This function is part of the invocation policy implemented by this class

  7. template<typename Action, typename Callback, typename... Ts> 
      hpx::future< typename traits::promise_local_result< typename hpx::actions::extract_action< Action >::remote_result_type >::type > 
      async_cb(launch policy, Callback && cb, Ts &&... vs) const;
    [Note]Note

    This function is part of the invocation policy implemented by this class

  8. template<typename Action, typename Continuation, typename... Ts> 
      bool apply(Continuation && c, threads::thread_priority priority, 
                 Ts &&... vs) const;
    [Note]Note

    This function is part of the invocation policy implemented by this class

  9. template<typename Action, typename... Ts> 
      bool apply(threads::thread_priority priority, Ts &&... vs) const;
  10. template<typename Action, typename Continuation, typename Callback, 
             typename... Ts> 
      bool apply_cb(Continuation && c, threads::thread_priority priority, 
                    Callback && cb, Ts &&... vs) const;
    [Note]Note

    This function is part of the invocation policy implemented by this class

  11. template<typename Action, typename Callback, typename... Ts> 
      bool apply_cb(threads::thread_priority priority, Callback && cb, 
                    Ts &&... vs) const;
  12. std::size_t get_num_localities() const;

    Returns the number of associated localities for this distribution policy

    [Note]Note

    This function is part of the creation policy implemented by this class

  13. hpx::id_type get_next_target() const;

    Returns the locality which is anticipated to be used for the next async operation


Global default_layout

hpx::components::default_layout

Synopsis

Description

A predefined instance of the default_distribution_policy. It represents the local locality and will place all newly created items there.
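As a minimal sketch (some_component is a hypothetical user-defined component type and the include shown is illustrative), a policy spanning all localities can be combined with hpx::new_ to distribute instances evenly:

```cpp
#include <hpx/include/components.hpp>

void create_everywhere()
{
    // A policy representing every locality in the system.
    hpx::components::default_distribution_policy policy =
        hpx::components::default_layout(hpx::find_all_localities());

    // Create 16 instances, spread evenly over the represented localities.
    hpx::future<std::vector<hpx::id_type> > ids =
        hpx::new_<some_component[]>(policy, 16);
}
```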

namespace hpx {
  namespace components {
    template<typename Component, typename DistPolicy> 
      future< naming::id_type > 
      migrate(naming::id_type const &, DistPolicy const &);
    template<typename Derived, typename Stub, typename DistPolicy> 
      Derived migrate(client_base< Derived, Stub > const &, 
                      DistPolicy const &);
    template<typename Component> 
      future< naming::id_type > 
      migrate(naming::id_type const &, naming::id_type const &);
    template<typename Derived, typename Stub> 
      Derived migrate(client_base< Derived, Stub > const &, 
                      naming::id_type const &);
  }
}

Function template migrate

hpx::components::migrate

Synopsis

// In header: <hpx/runtime/components/migrate_component.hpp>


template<typename Component, typename DistPolicy> 
  future< naming::id_type > 
  migrate(naming::id_type const & to_migrate, DistPolicy const & policy);

Description

Migrate the given component to the specified target locality

The function migrate<Component> will migrate the component referenced by to_migrate to a locality determined by the given distribution policy. It returns a future referring to the migrated component instance.

Parameters:

policy

[in] A distribution policy which will be used to determine the locality to migrate this object to.

to_migrate

[in] The global id of the component to migrate.

Template Parameters:

Component

Specifies the component type of the component to migrate.

DistPolicy

Specifies the distribution policy to use to determine the destination locality.

Returns:

A future representing the global id of the migrated component instance. This should be the same as to_migrate.


Function template migrate

hpx::components::migrate

Synopsis

// In header: <hpx/runtime/components/migrate_component.hpp>


template<typename Derived, typename Stub, typename DistPolicy> 
  Derived migrate(client_base< Derived, Stub > const & to_migrate, 
                  DistPolicy const & policy);

Description

Migrate the given component to the specified target locality

The function migrate<Component> will migrate the component referenced by to_migrate to a locality determined by the given distribution policy. It returns a future referring to the migrated component instance.

Parameters:

policy

[in] A distribution policy which will be used to determine the locality to migrate this object to.

to_migrate

[in] The client side representation of the component to migrate.

Template Parameters:

Derived

Specifies the component type of the component to migrate.

DistPolicy

Specifies the distribution policy to use to determine the destination locality.

Returns:

A client side representation of the migrated component instance. This should be the same as to_migrate.


Function template migrate

hpx::components::migrate

Synopsis

// In header: <hpx/runtime/components/migrate_component.hpp>


template<typename Component> 
  future< naming::id_type > 
  migrate(naming::id_type const & to_migrate, 
          naming::id_type const & target_locality);

Description

Migrate the component with the given id to the specified target locality

The function migrate<Component> will migrate the component referenced by to_migrate to the locality specified with target_locality. It returns a future referring to the migrated component instance.

Parameters:

target_locality

[in] The locality where the component should be migrated to.

to_migrate

[in] The global id of the component to migrate.

Template Parameters:

Component

Specifies the component type of the component to migrate.

Returns:

A future representing the global id of the migrated component instance. This should be the same as to_migrate.
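A minimal usage sketch (my_component and component_id are hypothetical placeholders):

```cpp
// Move the component to another locality; the returned id still refers
// to the same component instance.
hpx::id_type target = hpx::find_all_localities().back();
hpx::future<hpx::id_type> migrated =
    hpx::components::migrate<my_component>(component_id, target);
hpx::id_type same_id = migrated.get();  // equal to component_id
```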


Function template migrate

hpx::components::migrate

Synopsis

// In header: <hpx/runtime/components/migrate_component.hpp>


template<typename Derived, typename Stub> 
  Derived migrate(client_base< Derived, Stub > const & to_migrate, 
                  naming::id_type const & target_locality);

Description

Migrate the given component to the specified target locality

The function migrate<Component> will migrate the component referenced by to_migrate to the locality specified with target_locality. It returns a future referring to the migrated component instance.

Parameters:

target_locality

[in] The id of the locality to migrate this object to.

to_migrate

[in] The client side representation of the component to migrate.

Template Parameters:

Derived

Specifies the component type of the component to migrate.

Returns:

A client side representation of the migrated component instance. This should be the same as to_migrate.

namespace hpx {
  template<typename Component, typename... Ts> 
    < unspecified > new_(id_type const &, Ts &&...);
  template<typename Component, typename... Ts> 
    < unspecified > new_(id_type const &, std::size_t, Ts &&...);
  template<typename Component, typename DistPolicy, typename... Ts> 
    < unspecified > new_(DistPolicy const &, Ts &&...);
  template<typename Component, typename DistPolicy, typename... Ts> 
    < unspecified > new_(DistPolicy const &, std::size_t, Ts &&...);
}

Function template new_

hpx::new_ — Create one or more new instances of the given Component type on the specified locality.

Synopsis

// In header: <hpx/runtime/components/new.hpp>


template<typename Component, typename... Ts> 
  < unspecified > new_(id_type const & locality, Ts &&... vs);

Description

This function creates one or more new instances of the given Component type on the specified locality and returns a future object for the global address which can be used to reference the new component instance.

[Note]Note

This function requires specifying an explicit template argument, which defines what type of component(s) to create, for instance:

hpx::future<hpx::id_type> f =
   hpx::new_<some_component>(hpx::find_here(), ...);
hpx::id_type id = f.get();

Parameters:

locality

[in] The global address of the locality on which the new instance should be created.

vs

[in] Any number of arbitrary arguments (passed by value, by const reference or by rvalue reference) which will be forwarded to the constructor of the created component instance.

Returns:

The function returns different types depending on its use:

  • If the explicit template argument Component represents a component type (traits::is_component<Component>::value evaluates to true), the function will return an hpx::future object instance which can be used to retrieve the global address of the newly created component.

  • If the explicit template argument Component represents a client side object (traits::is_client<Component>::value evaluates to true), the function will return a new instance of that type which can be used to refer to the newly created component instance.


Function template new_

hpx::new_ — Create multiple new instances of the given Component type on the specified locality.

Synopsis

// In header: <hpx/runtime/components/new.hpp>


template<typename Component, typename... Ts> 
  < unspecified > 
  new_(id_type const & locality, std::size_t count, Ts &&... vs);

Description

This function creates multiple new instances of the given Component type on the specified locality and returns a future object holding the list of global addresses which can be used to reference the new component instances.

[Note]Note

This function requires specifying an explicit template argument, which defines what type of component(s) to create, for instance:

hpx::future<std::vector<hpx::id_type> > f =
   hpx::new_<some_component[]>(hpx::find_here(), 10, ...);
std::vector<hpx::id_type> ids = f.get();

Parameters:

count

[in] The number of component instances to create

locality

[in] The global address of the locality on which the new instances should be created.

vs

[in] Any number of arbitrary arguments (passed by value, by const reference or by rvalue reference) which will be forwarded to the constructor of the created component instance.

Returns:

The function returns different types depending on its use:

  • If the explicit template argument Component represents an array of a component type (i.e. Component[], where traits::is_component<Component>::value evaluates to true), the function will return an hpx::future object instance which holds a std::vector<hpx::id_type>, where each of the items in this vector is a global address of one of the newly created components.

  • If the explicit template argument Component represents an array of a client side object type (i.e. Component[], where traits::is_client<Component>::value evaluates to true), the function will return an hpx::future object instance which holds a std::vector of client side instances of the given type, each representing one of the newly created components.


Function template new_

hpx::new_ — Create one or more new instances of the given Component type based on the given distribution policy.

Synopsis

// In header: <hpx/runtime/components/new.hpp>


template<typename Component, typename DistPolicy, typename... Ts> 
  < unspecified > new_(DistPolicy const & policy, Ts &&... vs);

Description

This function creates one or more new instances of the given Component type on the localities defined by the given distribution policy and returns a future object for the global address which can be used to reference the new component instance(s).

[Note]Note

This function requires specifying an explicit template argument, which defines what type of component(s) to create, for instance:

hpx::future<hpx::id_type> f =
   hpx::new_<some_component>(hpx::default_layout, ...);
hpx::id_type id = f.get();

Parameters:

policy

[in] The distribution policy used to decide where to place the newly created objects.

vs

[in] Any number of arbitrary arguments (passed by value, by const reference or by rvalue reference) which will be forwarded to the constructor of the created component instance.

Returns:

The function returns different types depending on its use:

  • If the explicit template argument Component represents a component type (traits::is_component<Component>::value evaluates to true), the function will return an hpx::future object instance which can be used to retrieve the global address of the newly created component.

  • If the explicit template argument Component represents a client side object (traits::is_client<Component>::value evaluates to true), the function will return a new instance of that type which can be used to refer to the newly created component instance.


Function template new_

hpx::new_ — Create multiple new instances of the given Component type on the localities as defined by the given distribution policy.

Synopsis

// In header: <hpx/runtime/components/new.hpp>


template<typename Component, typename DistPolicy, typename... Ts> 
  < unspecified > 
  new_(DistPolicy const & policy, std::size_t count, Ts &&... vs);

Description

This function creates multiple new instances of the given Component type on the localities defined by the given distribution policy and returns a future object holding the list of global addresses which can be used to reference the new component instances.

[Note]Note

This function requires specifying an explicit template argument, which defines what type of component(s) to create, for instance:

hpx::future<std::vector<hpx::id_type> > f =
   hpx::new_<some_component[]>(hpx::default_layout, 10, ...);
std::vector<hpx::id_type> ids = f.get();

Parameters:

count

[in] The number of component instances to create

policy

[in] The distribution policy used to decide where to place the newly created objects.

vs

[in] Any number of arbitrary arguments (passed by value, by const reference or by rvalue reference) which will be forwarded to the constructor of the created component instance.

Returns:

The function returns different types depending on its use:

  • If the explicit template argument Component represents an array of a component type (i.e. Component[], where traits::is_component<Component>::value evaluates to true), the function will return an hpx::future object instance which holds a std::vector<hpx::id_type>, where each of the items in this vector is a global address of one of the newly created components.

  • If the explicit template argument Component represents an array of a client side object type (i.e. Component[], where traits::is_client<Component>::value evaluates to true), the function will return an hpx::future object instance which holds a std::vector of client side instances of the given type, each representing one of the newly created components.

namespace hpx {

  enum logging_destination { destination_hpx =  0, destination_timing =  1, 
                             destination_agas =  2, destination_parcel =  3, 
                             destination_app =  4, destination_debuglog =  5 };
  components::server::runtime_support * get_runtime_support_ptr();
  namespace components {
    void console_logging(logging_destination dest, std::size_t level, 
                         std::string const & msg);
    void cleanup_logging();
    void activate_logging();
    namespace server {
    }
    namespace stubs {
    }
  }
}
namespace hpx {
  naming::id_type find_here(error_code & = throws);
}

Function find_here

hpx::find_here — Return the global id representing this locality.

Synopsis

// In header: <hpx/runtime/find_here.hpp>


naming::id_type find_here(error_code & ec = throws);

Description

The function find_here() can be used to retrieve the global id usable to refer to the current locality.

[Note]Note

Generally, the id of a locality can be used for instance to create new instances of components and to invoke plain actions (global functions).

[Note]Note

As long as ec is not pre-initialized to hpx::throws this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

This function will return meaningful results only if called from an HPX-thread. It will return hpx::naming::invalid_id otherwise.

See Also:

hpx::find_all_localities(), hpx::find_locality()

Parameters:

ec

[in,out] this represents the error status on exit, if this is pre-initialized to hpx::throws the function will throw on error instead.

Returns:

The global id representing the locality this function has been called on.
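A minimal sketch of the non-throwing error_code convention described above:

```cpp
hpx::error_code ec;
hpx::naming::id_type here = hpx::find_here(ec);
if (ec)
{
    // not called from an HPX-thread: handle the error locally
    // instead of letting an hpx::exception propagate
}
```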

namespace hpx {
  boost::uint32_t get_locality_id(error_code & = throws);
}

Function get_locality_id

hpx::get_locality_id — Return the number of the locality this function is being called from.

Synopsis

// In header: <hpx/runtime/get_locality_id.hpp>


boost::uint32_t get_locality_id(error_code & ec = throws);

Description

This function returns the id of the current locality.

[Note]Note

The returned value is zero based and its maximum value is smaller than the overall number of localities the current application is running on (as returned by get_num_localities()).

As long as ec is not pre-initialized to hpx::throws this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

This function needs to be executed on an HPX-thread. It will fail otherwise (it will return -1).

Parameters:

ec

[in,out] this represents the error status on exit, if this is pre-initialized to hpx::throws the function will throw on error instead.

namespace hpx {
  std::string get_locality_name();
  future< std::string > get_locality_name(naming::id_type const &);
}

Function get_locality_name

hpx::get_locality_name — Return the name of the locality this function is called on.

Synopsis

// In header: <hpx/runtime/get_locality_name.hpp>


std::string get_locality_name();

Description

This function returns the name for the locality on which this function is called.

See Also:

future<std::string> get_locality_name(naming::id_type const& id)

Returns:

This function returns the name for the locality on which the function is called. The name is retrieved from the underlying networking layer and may be different for different parcelports.


Function get_locality_name

hpx::get_locality_name — Return the name of the referenced locality.

Synopsis

// In header: <hpx/runtime/get_locality_name.hpp>


future< std::string > get_locality_name(naming::id_type const & id);

Description

This function returns a future referring to the name for the locality of the given id.

See Also:

std::string get_locality_name()

Parameters:

id

[in] The global id of the locality for which the name should be retrieved

Returns:

This function returns the name for the locality of the given id. The name is retrieved from the underlying networking layer and may be different for different parcelports.
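A sketch printing the name of every locality (the include names are illustrative):

```cpp
#include <hpx/include/runtime.hpp>
#include <iostream>

void print_locality_names()
{
    for (hpx::naming::id_type const& loc : hpx::find_all_localities())
    {
        // Each call returns a future; get() waits for the remote reply.
        hpx::future<std::string> name = hpx::get_locality_name(loc);
        std::cout << name.get() << std::endl;
    }
}
```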

namespace hpx {

  // Return the number of OS-threads running in the runtime instance the current HPX-thread is associated with. 
  std::size_t get_os_thread_count();
  std::size_t get_os_thread_count(threads::executor const &);
}

Function get_os_thread_count

hpx::get_os_thread_count — Return the number of worker OS-threads used by the given executor to execute HPX threads.

Synopsis

// In header: <hpx/runtime/get_os_thread_count.hpp>


std::size_t get_os_thread_count(threads::executor const & exec);

Description

This function returns the number of cores used to execute HPX threads for the given executor. If the function is called while no HPX runtime system is active, it will return zero. If the executor is not valid, this function will fall back to retrieving the number of OS threads used by HPX.

Parameters:

exec

[in] The executor to be used.

namespace hpx {
  template<typename Component> 
    hpx::future< boost::shared_ptr< Component > > 
    get_ptr(naming::id_type const &);
  template<typename Component> 
    boost::shared_ptr< Component > 
    get_ptr_sync(naming::id_type const &, error_code & = throws);
}

Function template get_ptr

hpx::get_ptr — Returns a future referring to the pointer to the underlying memory of a component.

Synopsis

// In header: <hpx/runtime/get_ptr.hpp>


template<typename Component> 
  hpx::future< boost::shared_ptr< Component > > 
  get_ptr(naming::id_type const & id);

Description

The function hpx::get_ptr can be used to extract a future referring to the pointer to the underlying memory of a given component.

[Note]Note

This function will successfully return the requested result only if the given component is currently located on the calling locality. Otherwise the function will raise an error.

Parameters:

id

[in] The global id of the component for which the pointer to the underlying memory should be retrieved.

Returns:

This function returns a future representing the pointer to the underlying memory for the component instance with the given id.
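A minimal sketch (my_component is a hypothetical component type whose instance is local to the calling locality):

```cpp
hpx::future<boost::shared_ptr<my_component> > fp =
    hpx::get_ptr<my_component>(id);

// The returned shared_ptr gives direct access to the component's memory;
// release it as soon as direct access is no longer needed.
boost::shared_ptr<my_component> p = fp.get();
p->some_member_function();   // hypothetical member function
```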


Function template get_ptr_sync

hpx::get_ptr_sync — Returns the pointer to the underlying memory of a component.

Synopsis

// In header: <hpx/runtime/get_ptr.hpp>


template<typename Component> 
  boost::shared_ptr< Component > 
  get_ptr_sync(naming::id_type const & id, error_code & ec = throws);

Description

The function hpx::get_ptr_sync can be used to extract the pointer to the underlying memory of a given component.

[Note]Note

This function will successfully return the requested result only if the given component is currently located on the requesting locality. Otherwise the function will raise an error.

As long as ec is not pre-initialized to hpx::throws this function doesn't throw but returns the result code using the parameter ec. Otherwise it throws an instance of hpx::exception.

Parameters:

ec

[in,out] this represents the error status on exit, if this is pre-initialized to hpx::throws the function will throw on error instead.

id

[in] The global id of the component for which the pointer to the underlying memory should be retrieved.

Returns:

This function returns the pointer to the underlying memory for the component instance with the given id.

namespace hpx {
  std::size_t get_worker_thread_num();
}

Function get_worker_thread_num

hpx::get_worker_thread_num — Return the number of the current OS-thread running in the runtime instance the current HPX-thread is associated with.

Synopsis

// In header: <hpx/runtime/get_worker_thread_num.hpp>


std::size_t get_worker_thread_num();

Description

This function returns the zero based index of the OS-thread which executes the current HPX-thread.

[Note]Note

The returned value is zero based and its maximum value is smaller than the overall number of OS-threads executed (as returned by get_os_thread_count()).

This function needs to be executed on an HPX-thread. It will fail otherwise (it will return -1).

namespace hpx {
  enum launch;
}

Type launch

hpx::launch

Synopsis

// In header: <hpx/runtime/launch_policy.hpp>


enum launch { async =  0x01, deferred =  0x02, task =  0x04, sync =  0x08, 
              fork =  0x10, sync_policies =  0x0a, async_policies =  0x15, 
              all =  0x1f };

Description

Launch policy for hpx::async

namespace hpx {
  namespace naming {
    id_type unmanaged(id_type const &);
  }
}

Function unmanaged

hpx::naming::unmanaged

Synopsis

// In header: <hpx/runtime/naming/unmanaged.hpp>


id_type unmanaged(id_type const & id);

Description

The helper function hpx::unmanaged can be used to generate a global identifier which does not participate in the automatic garbage collection.

[Note]Note

This function allows HPX to apply certain optimizations to its memory management. It requires, however, that the user take full responsibility for keeping the referenced objects alive long enough.

Parameters:

id

[in] The id to generate the unmanaged global id from. This parameter can itself be a managed or an unmanaged global id.

Returns:

This function returns a new global id referencing the same object as the parameter id. The only difference is that the returned global identifier does not participate in the automatic garbage collection.
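A minimal sketch (some_operation is a hypothetical function receiving the id):

```cpp
// Weakly reference a component: the unmanaged id does not keep the
// object alive, so the caller must guarantee its lifetime instead.
hpx::naming::id_type weak_ref = hpx::naming::unmanaged(id);
some_operation(weak_ref);   // hypothetical use of the unmanaged id
```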

namespace hpx {
  namespace naming {
    typedef agas::addressing_service resolver_client;
    typedef boost::uint64_t address_type;

    HPX_CONSTEXPR_OR_CONST boost::uint32_t invalid_locality_id;
    resolver_client & get_agas_client();
  }
}

Global invalid_locality_id

hpx::naming::invalid_locality_id

Synopsis

// In header: <hpx/runtime/naming_fwd.hpp>

HPX_CONSTEXPR_OR_CONST boost::uint32_t invalid_locality_id;
namespace hpx {
  namespace parcelset {
    typedef util::function_nonser< void(boost::system::error_code const &, parcel const &) > write_handler_type;
    policies::message_handler * 
    get_message_handler(parcelhandler * ph, char const * name, 
                        char const * type, std::size_t num, 
                        std::size_t interval, locality const & l, 
                        error_code & ec = throws);
    bool do_background_work(std::size_t num_thread = 0);
    namespace policies {
    }
  }
}
namespace hpx {
  enum runtime_mode;
  char const * get_runtime_mode_name(runtime_mode);
  runtime_mode get_runtime_mode_from_name(std::string const & mode);
}

Type runtime_mode

hpx::runtime_mode

Synopsis

// In header: <hpx/runtime/runtime_mode.hpp>


enum runtime_mode { runtime_mode_invalid =  -1, runtime_mode_console =  0, 
                    runtime_mode_worker =  1, runtime_mode_connect =  2, 
                    runtime_mode_default =  3, runtime_mode_last };

Description

An HPX runtime can be executed in several different modes; the two primary ones are console mode and worker mode.

runtime_mode_console
The runtime is the console locality.
runtime_mode_worker
The runtime is a worker locality.
runtime_mode_connect

The runtime is a worker locality connecting late

runtime_mode_default

The runtime mode will be determined based on the command line arguments


Function get_runtime_mode_name

hpx::get_runtime_mode_name

Synopsis

// In header: <hpx/runtime/runtime_mode.hpp>


char const * get_runtime_mode_name(runtime_mode state);

Description

Get the readable string representing the name of the given runtime_mode constant.

namespace hpx {
  typedef util::function_nonser< void(boost::system::error_code const &, parcelset::parcel const &) > parcel_write_handler_type;
  parcel_write_handler_type 
  set_parcel_write_handler(parcel_write_handler_type const &);
}

Type definition parcel_write_handler_type

parcel_write_handler_type

Synopsis

// In header: <hpx/runtime/set_parcel_write_handler.hpp>


typedef util::function_nonser< void(boost::system::error_code const &, parcelset::parcel const &) > parcel_write_handler_type;

Description

The type of a function which can be registered as a parcel write handler using the function hpx::set_parcel_write_handler.

[Note]Note

A parcel write handler is a function which is called by the parcel layer whenever a parcel has been sent by the underlying networking library and if no explicit parcel handler function was specified for the parcel.


Function set_parcel_write_handler

hpx::set_parcel_write_handler

Synopsis

// In header: <hpx/runtime/set_parcel_write_handler.hpp>


parcel_write_handler_type 
set_parcel_write_handler(parcel_write_handler_type const & f);

Description

Set the default parcel write handler which is invoked once a parcel has been sent if no explicit write handler was specified.

[Note]Note

If no parcel handler function is registered by the user the system will call a default parcel handler function which is not performing any actions. However, this default function will terminate the application in case of any errors detected during preparing or sending the parcel.

Parameters:

f

The new parcel write handler to use from this point on

Returns:

The function returns the parcel write handler which was installed before this function was called.
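A sketch installing a handler that logs send errors; the lambda's signature follows parcel_write_handler_type:

```cpp
#include <iostream>

// Install the handler and keep the previously installed one so it
// could be restored later.
hpx::parcel_write_handler_type previous =
    hpx::set_parcel_write_handler(
        [](boost::system::error_code const& ec,
           hpx::parcelset::parcel const& p)
        {
            if (ec)
                std::cerr << "parcel send failed: " << ec.message() << '\n';
        });
```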

namespace hpx {
  namespace threads {
    thread_self & get_self();
    thread_self * get_self_ptr();
    thread_self_impl_type * get_ctx_ptr();
    thread_self * get_self_ptr_checked(error_code & = throws);
    thread_id_type get_self_id();
    thread_id_repr_type get_parent_id();
    std::size_t get_parent_phase();
    boost::uint32_t get_parent_locality_id();
    boost::uint64_t get_self_component_id();
    threadmanager_base & get_thread_manager();
    boost::int64_t get_thread_count(thread_state_enum = unknown);
    boost::int64_t 
    get_thread_count(thread_priority priority, 
                     thread_state_enum state = unknown);
  }
  namespace util {
    namespace coroutines {
    }
  }
}

Function get_self

hpx::threads::get_self

Synopsis

// In header: <hpx/runtime/threads/thread_data_fwd.hpp>


thread_self & get_self();

Description

The function get_self returns a reference to the (OS thread specific) self reference to the current HPX thread.


Function get_self_ptr

hpx::threads::get_self_ptr

Synopsis

// In header: <hpx/runtime/threads/thread_data_fwd.hpp>


thread_self * get_self_ptr();

Description

The function get_self_ptr returns a pointer to the (OS thread specific) self reference to the current HPX thread.


Function get_ctx_ptr

hpx::threads::get_ctx_ptr

Synopsis

// In header: <hpx/runtime/threads/thread_data_fwd.hpp>


thread_self_impl_type * get_ctx_ptr();

Description

The function get_ctx_ptr returns a pointer to the internal data associated with each coroutine.


Function get_self_ptr_checked

hpx::threads::get_self_ptr_checked

Synopsis

// In header: <hpx/runtime/threads/thread_data_fwd.hpp>


thread_self * get_self_ptr_checked(error_code & ec = throws);

Description

The function get_self_ptr_checked returns a pointer to the (OS thread specific) self reference to the current HPX thread.


Function get_self_id

hpx::threads::get_self_id

Synopsis

// In header: <hpx/runtime/threads/thread_data_fwd.hpp>


thread_id_type get_self_id();

Description

The function get_self_id returns the HPX thread id of the current thread (or zero if the current thread is not a HPX thread).


Function get_parent_id

hpx::threads::get_parent_id

Synopsis

// In header: <hpx/runtime/threads/thread_data_fwd.hpp>


thread_id_repr_type get_parent_id();

Description

The function get_parent_id returns the HPX thread id of the current thread's parent (or zero if the current thread is not a HPX thread).

[Note]Note

This function will return a meaningful value only if the code was compiled with HPX_HAVE_THREAD_PARENT_REFERENCE being defined.


Function get_parent_phase

hpx::threads::get_parent_phase

Synopsis

// In header: <hpx/runtime/threads/thread_data_fwd.hpp>


std::size_t get_parent_phase();

Description

The function get_parent_phase returns the HPX phase of the current thread's parent (or zero if the current thread is not a HPX thread).

[Note]Note

This function will return a meaningful value only if the code was compiled with HPX_HAVE_THREAD_PARENT_REFERENCE being defined.


Function get_parent_locality_id

hpx::threads::get_parent_locality_id

Synopsis

// In header: <hpx/runtime/threads/thread_data_fwd.hpp>


boost::uint32_t get_parent_locality_id();

Description

The function get_parent_locality_id returns the id of the locality of the current thread's parent (or zero if the current thread is not a HPX thread).

[Note]Note

This function will return a meaningful value only if the code was compiled with HPX_HAVE_THREAD_PARENT_REFERENCE being defined.


Function get_self_component_id

hpx::threads::get_self_component_id

Synopsis

// In header: <hpx/runtime/threads/thread_data_fwd.hpp>


boost::uint64_t get_self_component_id();

Description

The function get_self_component_id returns the local virtual address (lva) of the component the current thread is acting on.

[Note]Note

This function will return a meaningful value only if the code was compiled with HPX_HAVE_THREAD_TARGET_ADDRESS being defined.


Function get_thread_manager

hpx::threads::get_thread_manager

Synopsis

// In header: <hpx/runtime/threads/thread_data_fwd.hpp>


threadmanager_base & get_thread_manager();

Description

The function get_thread_manager returns a reference to the current thread manager.


Function get_thread_count

hpx::threads::get_thread_count

Synopsis

// In header: <hpx/runtime/threads/thread_data_fwd.hpp>


boost::int64_t get_thread_count(thread_state_enum state = unknown);

Description

The function get_thread_count returns the number of currently known threads.

[Note]Note

If state == unknown, this function returns not only the number of currently existing threads, but also adds the number of registered task descriptions (which have not been converted into threads yet).

namespace hpx {
  namespace threads {
    enum thread_state_enum;
    enum thread_priority;
    enum thread_state_ex_enum;

    enum thread_stacksize { thread_stacksize_unknown =  -1, 
                            thread_stacksize_small =  1, 
                            thread_stacksize_medium =  2, 
                            thread_stacksize_large =  3, 
                            thread_stacksize_huge =  4, 
                            thread_stacksize_default =  thread_stacksize_small, 
                            thread_stacksize_minimal =  thread_stacksize_small, 
                            thread_stacksize_maximal =  thread_stacksize_huge };

    typedef unspecified thread_state;
    typedef unspecified thread_state_ex;
    char const * get_thread_state_name(thread_state_enum state);
    char const * get_thread_priority_name(thread_priority priority);
    char const * get_stack_size_name(std::ptrdiff_t size);
  }
}

Type thread_state_enum

hpx::threads::thread_state_enum

Synopsis

// In header: <hpx/runtime/threads/thread_enums.hpp>


enum thread_state_enum { unknown =  0, active =  1, pending =  2, 
                         suspended =  3, depleted =  4, terminated =  5, 
                         staged =  6 };

Description

The thread_state_enum enumerator encodes the current state of a thread instance.

active

thread is currently active (running, has resources)

pending

thread is pending (ready to run, but no hardware resource available)

suspended

thread has been suspended (waiting for synchronization event, but still known and under control of the thread-manager)

depleted

thread has been depleted (deeply suspended, it is not known to the thread-manager)

terminated

thread has been stopped and may be garbage collected

staged

this is not a real thread state; it allows referencing staged task descriptions, which will eventually be converted into thread objects


Type thread_priority

hpx::threads::thread_priority

Synopsis

// In header: <hpx/runtime/threads/thread_enums.hpp>



enum thread_priority { thread_priority_unknown =  -1, 
                       thread_priority_default =  0, thread_priority_low =  1, 
                       thread_priority_normal =  2, 
                       thread_priority_critical =  3, 
                       thread_priority_boost =  4 };

Description

The thread_priority enumerator encodes the scheduling priority of HPX threads.

thread_priority_default
use default priority
thread_priority_low
low thread priority
thread_priority_normal
normal thread priority (default)
thread_priority_critical
high thread priority
thread_priority_boost
high thread priority for first invocation, normal afterwards


Type thread_state_ex_enum

hpx::threads::thread_state_ex_enum

Synopsis

// In header: <hpx/runtime/threads/thread_enums.hpp>



enum thread_state_ex_enum { wait_unknown =  -1, wait_signaled =  0, 
                            wait_timeout =  1, wait_terminate =  2, 
                            wait_abort =  3 };

Description

The thread_state_ex_enum enumerator encodes the reason why a thread is being restarted.

wait_signaled
The thread has been signaled.
wait_timeout
The thread has been reactivated after a timeout.
wait_terminate
The thread needs to be terminated.
wait_abort
The thread needs to be aborted.
namespace hpx {
  namespace threads {
    namespace policies {
      typedef local_priority_queue_scheduler< boost::mutex, lockfree_fifo, lockfree_fifo, lockfree_lifo > fifo_priority_queue_scheduler;
      typedef fifo_priority_queue_scheduler queue_scheduler;
    }
  }
}
namespace hpx {
  void trigger_lco_event(naming::id_type const &, naming::address &&, 
                         bool = true);
  void trigger_lco_event(naming::id_type const &, bool = true);
  void trigger_lco_event(naming::id_type const &, naming::address &&, 
                         naming::id_type const &, bool = true);
  void trigger_lco_event(naming::id_type const &, naming::id_type const &, 
                         bool = true);
  template<typename T> 
    void set_lco_value(naming::id_type const &, naming::address &&, T &&, 
                       bool = true);
  template<typename T> 
    std::enable_if< !std::is_same< typename util::decay< T >::type, naming::address >::value >::type 
    set_lco_value(naming::id_type const &, T &&, bool = true);
  template<typename T> 
    void set_lco_value(naming::id_type const &, naming::address &&, T &&, 
                       naming::id_type const &, bool = true);
  template<typename T> 
    void set_lco_value(naming::id_type const &, T &&, naming::id_type const &, 
                       bool = true);
  void set_lco_error(naming::id_type const &, naming::address &&, 
                     boost::exception_ptr const &, bool = true);
  void set_lco_error(naming::id_type const &, naming::address &&, 
                     boost::exception_ptr &&, bool = true);
  void set_lco_error(naming::id_type const &, boost::exception_ptr const &, 
                     bool = true);
  void set_lco_error(naming::id_type const &, boost::exception_ptr &&, 
                     bool = true);
  void set_lco_error(naming::id_type const &, naming::address &&, 
                     boost::exception_ptr const &, naming::id_type const &, 
                     bool = true);
  void set_lco_error(naming::id_type const &, naming::address &&, 
                     boost::exception_ptr &&, naming::id_type const &, 
                     bool = true);
  void set_lco_error(naming::id_type const &, boost::exception_ptr const &, 
                     naming::id_type const &, bool = true);
  void set_lco_error(naming::id_type const &, boost::exception_ptr &&, 
                     naming::id_type const &, bool = true);
}

Function trigger_lco_event

hpx::trigger_lco_event — Trigger the LCO referenced by the given id.

Synopsis

// In header: <hpx/runtime/trigger_lco.hpp>


void trigger_lco_event(naming::id_type const & id, naming::address && addr, 
                       bool move_credits = true);

Description

Parameters:

addr

[in] This represents the address of the LCO which should be triggered.

id

[in] This represents the id of the LCO which should be triggered.

move_credits

[in] If this is set to true then it is ok to send all credits in id along with the generated message. The default value is true.


Function trigger_lco_event

hpx::trigger_lco_event — Trigger the LCO referenced by the given id.

Synopsis

// In header: <hpx/runtime/trigger_lco.hpp>


void trigger_lco_event(naming::id_type const & id, bool move_credits = true);

Description

Parameters:

id

[in] This represents the id of the LCO which should be triggered.

move_credits

[in] If this is set to true then it is ok to send all credits in id along with the generated message. The default value is true.


Function trigger_lco_event

hpx::trigger_lco_event — Trigger the LCO referenced by the given id.

Synopsis

// In header: <hpx/runtime/trigger_lco.hpp>


void trigger_lco_event(naming::id_type const & id, naming::address && addr, 
                       naming::id_type const & cont, 
                       bool move_credits = true);

Description

Parameters:

addr

[in] This represents the address of the LCO which should be triggered.

cont

[in] This represents the LCO to trigger after completion.

id

[in] This represents the id of the LCO which should be triggered.

move_credits

[in] If this is set to true then it is ok to send all credits in id along with the generated message. The default value is true.


Function trigger_lco_event

hpx::trigger_lco_event — Trigger the LCO referenced by the given id.

Synopsis

// In header: <hpx/runtime/trigger_lco.hpp>


void trigger_lco_event(naming::id_type const & id, 
                       naming::id_type const & cont, 
                       bool move_credits = true);

Description

Parameters:

cont

[in] This represents the LCO to trigger after completion.

id

[in] This represents the id of the LCO which should be triggered.

move_credits

[in] If this is set to true then it is ok to send all credits in id along with the generated message. The default value is true.


Function template set_lco_value

hpx::set_lco_value — Set the result value for the LCO referenced by the given id.

Synopsis

// In header: <hpx/runtime/trigger_lco.hpp>


template<typename T> 
  void set_lco_value(naming::id_type const & id, naming::address && addr, 
                     T && t, bool move_credits = true);

Description

Parameters:

addr

[in] This represents the address of the LCO which should be triggered.

id

[in] This represents the id of the LCO which should receive the given value.

move_credits

[in] If this is set to true then it is ok to send all credits in id along with the generated message. The default value is true.

t

[in] This is the value which should be sent to the LCO.


Function template set_lco_value

hpx::set_lco_value — Set the result value for the LCO referenced by the given id.

Synopsis

// In header: <hpx/runtime/trigger_lco.hpp>


template<typename T> 
  std::enable_if< !std::is_same< typename util::decay< T >::type, naming::address >::value >::type 
  set_lco_value(naming::id_type const & id, T && t, bool move_credits = true);

Description

Parameters:

id

[in] This represents the id of the LCO which should receive the given value.

move_credits

[in] If this is set to true then it is ok to send all credits in id along with the generated message. The default value is true.

t

[in] This is the value which should be sent to the LCO.


Function template set_lco_value

hpx::set_lco_value — Set the result value for the LCO referenced by the given id.

Synopsis

// In header: <hpx/runtime/trigger_lco.hpp>


template<typename T> 
  void set_lco_value(naming::id_type const & id, naming::address && addr, 
                     T && t, naming::id_type const & cont, 
                     bool move_credits = true);

Description

Parameters:

addr

[in] This represents the address of the LCO which should be triggered.

cont

[in] This represents the LCO to trigger after completion.

id

[in] This represents the id of the LCO which should receive the given value.

move_credits

[in] If this is set to true then it is ok to send all credits in id along with the generated message. The default value is true.

t

[in] This is the value which should be sent to the LCO.


Function template set_lco_value

hpx::set_lco_value — Set the result value for the LCO referenced by the given id.

Synopsis

// In header: <hpx/runtime/trigger_lco.hpp>


template<typename T> 
  void set_lco_value(naming::id_type const & id, T && t, 
                     naming::id_type const & cont, bool move_credits = true);

Description

Parameters:

cont

[in] This represents the LCO to trigger after completion.

id

[in] This represents the id of the LCO which should receive the given value.

move_credits

[in] If this is set to true then it is ok to send all credits in id along with the generated message. The default value is true.

t

[in] This is the value which should be sent to the LCO.


Function set_lco_error

hpx::set_lco_error — Set the error state for the LCO referenced by the given id.

Synopsis

// In header: <hpx/runtime/trigger_lco.hpp>


void set_lco_error(naming::id_type const & id, naming::address && addr, 
                   boost::exception_ptr const & e, bool move_credits = true);

Description

Parameters:

addr

[in] This represents the address of the LCO which should be triggered.

e

[in] This is the error value which should be sent to the LCO.

id

[in] This represents the id of the LCO which should receive the error value.

move_credits

[in] If this is set to true then it is ok to send all credits in id along with the generated message. The default value is true.


Function set_lco_error

hpx::set_lco_error — Set the error state for the LCO referenced by the given id.

Synopsis

// In header: <hpx/runtime/trigger_lco.hpp>


void set_lco_error(naming::id_type const & id, naming::address && addr, 
                   boost::exception_ptr && e, bool move_credits = true);

Description

Parameters:

addr

[in] This represents the address of the LCO which should be triggered.

e

[in] This is the error value which should be sent to the LCO.

id

[in] This represents the id of the LCO which should receive the error value.

move_credits

[in] If this is set to true then it is ok to send all credits in id along with the generated message. The default value is true.


Function set_lco_error

hpx::set_lco_error — Set the error state for the LCO referenced by the given id.

Synopsis

// In header: <hpx/runtime/trigger_lco.hpp>


void set_lco_error(naming::id_type const & id, boost::exception_ptr const & e, 
                   bool move_credits = true);

Description

Parameters:

e

[in] This is the error value which should be sent to the LCO.

id

[in] This represents the id of the LCO which should receive the error value.

move_credits

[in] If this is set to true then it is ok to send all credits in id along with the generated message. The default value is true.


Function set_lco_error

hpx::set_lco_error — Set the error state for the LCO referenced by the given id.

Synopsis

// In header: <hpx/runtime/trigger_lco.hpp>


void set_lco_error(naming::id_type const & id, boost::exception_ptr && e, 
                   bool move_credits = true);

Description

Parameters:

e

[in] This is the error value which should be sent to the LCO.

id

[in] This represents the id of the LCO which should receive the error value.

move_credits

[in] If this is set to true then it is ok to send all credits in id along with the generated message. The default value is true.


Function set_lco_error

hpx::set_lco_error — Set the error state for the LCO referenced by the given id.

Synopsis

// In header: <hpx/runtime/trigger_lco.hpp>


void set_lco_error(naming::id_type const & id, naming::address && addr, 
                   boost::exception_ptr const & e, 
                   naming::id_type const & cont, bool move_credits = true);

Description

Parameters:

addr

[in] This represents the address of the LCO which should be triggered.

cont

[in] This represents the LCO to trigger after completion.

e

[in] This is the error value which should be sent to the LCO.

id

[in] This represents the id of the LCO which should receive the error value.

move_credits

[in] If this is set to true then it is ok to send all credits in id along with the generated message. The default value is true.


Function set_lco_error

hpx::set_lco_error — Set the error state for the LCO referenced by the given id.

Synopsis

// In header: <hpx/runtime/trigger_lco.hpp>


void set_lco_error(naming::id_type const & id, naming::address && addr, 
                   boost::exception_ptr && e, naming::id_type const & cont, 
                   bool move_credits = true);

Description

Parameters:

addr

[in] This represents the address of the LCO which should be triggered.

e

[in] This is the error value which should be sent to the LCO.

id

[in] This represents the id of the LCO which should receive the error value.

move_credits

[in] If this is set to true then it is ok to send all credits in id along with the generated message. The default value is true.


Function set_lco_error

hpx::set_lco_error — Set the error state for the LCO referenced by the given id.

Synopsis

// In header: <hpx/runtime/trigger_lco.hpp>


void set_lco_error(naming::id_type const & id, boost::exception_ptr const & e, 
                   naming::id_type const & cont, bool move_credits = true);

Description

Parameters:

cont

[in] This represents the LCO to trigger after completion.

e

[in] This is the error value which should be sent to the LCO.

id

[in] This represents the id of the LCO which should receive the error value.

move_credits

[in] If this is set to true then it is ok to send all credits in id along with the generated message. The default value is true.


Function set_lco_error

hpx::set_lco_error — Set the error state for the LCO referenced by the given id.

Synopsis

// In header: <hpx/runtime/trigger_lco.hpp>


void set_lco_error(naming::id_type const & id, boost::exception_ptr && e, 
                   naming::id_type const & cont, bool move_credits = true);

Description

Parameters:

e

[in] This is the error value which should be sent to the LCO.

id

[in] This represents the id of the LCO which should receive the error value.

move_credits

[in] If this is set to true then it is ok to send all credits in id along with the generated message. The default value is true.

namespace hpx {
  runtime & get_runtime();
  runtime * get_runtime_ptr();

  // The function get_locality returns a reference to the locality prefix. 
  naming::gid_type const & get_locality();
  std::size_t get_runtime_instance_number();
  void report_error(std::size_t num_thread, boost::exception_ptr const & e);
  void report_error(boost::exception_ptr const & e);

  // Register a function to be called during system shutdown. 
  bool register_on_exit(util::function_nonser< void()> const &);
}

Function get_runtime

hpx::get_runtime

Synopsis

// In header: <hpx/runtime_fwd.hpp>


runtime & get_runtime();

Description

The function get_runtime returns a reference to the (thread specific) runtime instance.


Function get_runtime_instance_number

hpx::get_runtime_instance_number

Synopsis

// In header: <hpx/runtime_fwd.hpp>


std::size_t get_runtime_instance_number();

Description

The function get_runtime_instance_number returns a unique number associated with the runtime instance the current thread is running in.

This section gives definitions for some of the terms used throughout the HPX documentation and source code.

Locality

A locality in HPX describes a synchronous domain of execution, or the domain of bounded upper response time. This is normally a single node in a cluster or a NUMA domain in an SMP machine.

Active Global Address Space (AGAS)

HPX incorporates a global address space. Any executing thread can access any object within the domain of the parallel application, with the caveat that it must have appropriate access privileges. The model does not assume that global addresses are cache coherent; all loads and stores deal directly with the site of the target object. All global addresses within a Synchronous Domain are assumed to be cache coherent for those processor cores that incorporate transparent caches. The Active Global Address Space used by HPX differs from research PGAS (Partitioned Global Address Space) models, which are passive in their means of address translation. Copy semantics, distributed compound operations, and affinity relationships are some of the global functionality supported by AGAS.

Process

The concept of the "process" in HPX is extended beyond that of either sequential execution or communicating sequential processes. While the notion of process suggests action (as do "function" or "subroutine"), it carries the further responsibility of context, that is, of serving as the logical container of program state. It is in this sense that the term process is employed in HPX. Furthermore, "parallel processes" in HPX designates the presence of parallelism within the context of a given process, as well as the coarse-grained parallelism achieved through concurrency of multiple processes of an executing user job. HPX processes provide a hierarchical name space within the framework of the active global address space and support multiple means of internal state access from external sources. They also incorporate capabilities-based access rights for protection and security.

Parcel

The Parcel is a component in HPX that communicates data, invokes an action at a distance, and distributes flow control through the migration of continuations. Parcels bridge the gap of asynchrony between synchronous domains while maintaining symmetry of semantics between local and global execution. Parcels enable message-driven computation and may be seen as a form of "active messages". Other important forms of message-driven computation predating active messages include dataflow tokens, the J-machine's support for remote method instantiation, and, at the coarse-grained end, variations of Unix remote procedure calls, among others. Message-driven computation enables work to be moved to the data, in addition to the more common action of bringing data to the work. A parcel can cause actions to occur remotely and asynchronously, among which are the creation of threads at different system nodes or synchronous domains.

Local Control Object (LCO)

A local control object (sometimes called a lightweight control object) is a general term for the synchronization mechanisms used in HPX. Any object implementing a certain concept can be seen as an LCO. This concept encapsulates the ability to be triggered by one or more events which, when they take the object into a predefined state, cause a thread to be executed. This can either create a new thread or resume an existing one.

The LCO is a family of synchronization functions potentially representing many classes of synchronization constructs, each with many possible variations and multiple instances. The LCO is sufficiently general that it can subsume the functionality of conventional synchronization primitives such as spinlocks, mutexes, semaphores, and global barriers. However, because the concept is so rich, an LCO can also represent powerful synchronization and control functionality not widely employed, such as dataflow and futures (among others), which opens up enormous opportunities for a rich diversity of distributed control and operation.

The STE||AR Group (pronounced as stellar) stands for "Systems Technology, Emergent Parallelism, and Algorithm Research". We are an international group of faculty, researchers, and students working at different organizations. The goal of the STE||AR Group is to promote the development of scalable parallel applications by providing a community for ideas, a framework for collaboration, and a platform for communicating these concepts to the broader community.

All of our work is centered around building technologies for scalable parallel applications. HPX, our general purpose C++ runtime system for parallel and distributed applications, is no exception. We use HPX for a broad range of scientific applications, helping scientists and developers to write code which scales better and shows better performance compared to more conventional programming models such as MPI.

HPX is based on ParalleX, which is a new (and still experimental) parallel execution model aiming to overcome the limitations imposed by current hardware and by the way we write applications today. Our group focuses on two types of applications: those requiring excellent strong scaling, allowing for a dramatic reduction of execution time for fixed workloads, and those needing the highest level of sustained performance through massive parallelism. These applications are presently unable (through conventional practices) to effectively exploit a relatively small number of cores in a multi-core system. By extension, these applications will not be able to exploit high-end computing systems which are likely to employ hundreds of millions of such cores by the end of this decade.

Critical bottlenecks to the effective use of new generation high performance computing (HPC) systems include:

  • Starvation: due to lack of usable application parallelism and means of managing it,
  • Overhead: reduction to permit strong scalability, improve efficiency, and enable dynamic resource management,
  • Latency: from remote access across system or to local memories,
  • Contention: due to multicore chip I/O pins, memory banks, and system interconnects.

The ParalleX model has been devised to address these challenges by enabling a new computing dynamic through the application of message-driven computation in a global address space context with lightweight synchronization. The work on HPX is centered around implementing the concepts as defined by the ParalleX model. HPX is currently targeted at conventional machines, such as classical Linux-based Beowulf clusters and SMP nodes.

We fully understand that the success of HPX (and ParalleX) is very much the result of the work of many people. For a list of contributors, see the tables below.

HPX Contributors

Table 33. Contributors

Hartmut Kaiser: Center for Computation and Technology (CCT), Louisiana State University (LSU)
Thomas Heller: Department of Computer Science 3 - Computer Architecture, Friedrich-Alexander University Erlangen-Nuremberg (FAU)
Agustin Berge: Center for Computation and Technology (CCT), Louisiana State University (LSU)
Anton Bikineev: Center for Computation and Technology (CCT), Louisiana State University (LSU)
Martin Stumpf: Department of Computer Science 3 - Computer Architecture, Friedrich-Alexander University Erlangen-Nuremberg (FAU)
Bryce Adelstein-Lelbach: Center for Computation and Technology (CCT), Louisiana State University (LSU)
Vinay C Amatya: Center for Computation and Technology (CCT), Louisiana State University (LSU)
Shuangyang Yang: Center for Computation and Technology (CCT), Louisiana State University (LSU)
Jeroen Habraken: Technische Universiteit Eindhoven
Steven Brandt: Center for Computation and Technology (CCT), Louisiana State University (LSU)
Andrew Kemp: Center for Computation and Technology (CCT), Louisiana State University (LSU)
Adrian Serio: Center for Computation and Technology (CCT), Louisiana State University (LSU)
Maciej Brodowicz: Center for Research in Extreme Scale Technologies (CREST), Indiana University (IU)
Matthew Anderson: Center for Research in Extreme Scale Technologies (CREST), Indiana University (IU)
Alex Nagelberg: Center for Computation and Technology (CCT), Louisiana State University (LSU)
Dylan Stark: Sandia National Labs (Albuquerque)


Contributors to this Document


Acknowledgements

Thanks also to the following people who contributed directly or indirectly to the project through discussions, pull requests, documentation patches, etc.

  • Kevin Huck and Nick Chaimov (University of Oregon), who contributed the integration of APEX (Autonomic Performance Environment for eXascale) with HPX.
  • Francisco Jose Tapia, who helped with implementing the parallel sort algorithm for HPX.
  • Patrick Diehl, who worked on implementing CUDA support for our companion library targeting GPGPUs (HPXCL).
  • Eric Lemanissier contributed fixes to allow compilation using the MingW toolchain.
  • Nidhi Makhijani, who helped clean up some enum inconsistencies in HPX and contributed to the resource manager used in the thread scheduling subsystem. She also worked on HPX in the context of the Google Summer of Code 2015.
  • Larry Xiao, Devang Bacharwar, Marcin Copik, and Konstantin Kronfeldner who worked on HPX in the context of the Google Summer of Code program 2015.
  • Daniel Bourgeois (Center for Computation and Technology (CCT)) who contributed to HPX the implementation of several parallel algorithms (as proposed by N4313).
  • Anuj Sharma and Christopher Bross (Department of Computer Science 3 - Computer Architecture), who worked on HPX in the context of the Google Summer of Code program 2014.
  • Martin Stumpf (Department of Computer Science 3 - Computer Architecture), who rebuilt our continuous testing infrastructure (see the HPX Buildbot Website). Martin is also working on HPXCL (mainly all work related to OpenCL) and implementing an HPX backend for POCL, a portable computing language solution based on OpenCL.
  • Grant Mercer (University of Nevada, Las Vegas), who helped create many of the parallel algorithms (as proposed by N4313).
  • Damond Howard (Louisiana State University (LSU)), who works on HPXCL (mainly all work related to CUDA).
  • Parsa Amini (Center for Computation and Technology (CCT)), who works on the implementation and optimization of AGAS (Active Global Address Space).
  • Christoph Junghans (Los Alamos National Lab), who helped make our build system more portable.
  • Andreas Buhr, who helped with improving our documentation.
  • Antoine Tran Tan (Laboratoire de Recherche en Informatique, Paris), who worked on integrating HPX as a backend for NT2.
  • John Biddiscombe (Swiss National Supercomputing Centre), who helped with the BlueGene/Q port of HPX, implemented the parallel sort algorithm, and made several other contributions.
  • Erik Schnetter (Perimeter Institute for Theoretical Physics), who greatly helped to make HPX more robust by submitting a large number of problem reports and feature requests, and who made several direct contributions.
  • Mathias Gaunard (Metascale), who contributed several patches to reduce compile time warnings generated while compiling HPX.
  • Patricia Grubel (New Mexico State University), who contributed the description of the different HPX thread scheduler policies and is working on the performance analysis of our thread scheduling subsystem.
  • Lars Viklund, who contributed platform specific patches for FreeBSD and MSVC12.
  • Agustin Berge, who contributed patches fixing some very nasty hidden template meta-programming issues. He rewrote large parts of the API elements ensuring strict conformance with C++11/14.
  • Anton Bikineev, who contributed changes to make the use of boost::lexical_cast safer, a thread safety fix to the iostreams module, and a complete rewrite of the serialization infrastructure, replacing Boost.Serialization inside HPX.
  • Pyry Jahkola, who contributed the Mac OS build system and build documentation on how to build HPX using Clang and libc++.
  • Mario Mulansky, who created an HPX backend for his Boost.Odeint library, and who submitted several test cases allowing us to reproduce and fix problems in HPX.
  • Rekha Raj, who contributed changes to the description of the Windows build instructions.
  • Alex Nagelberg for his work on implementing a C wrapper API for HPX.

In addition to the people who worked directly with HPX development we would like to acknowledge the NSF, DoE, DARPA, Center for Computation and Technology (CCT), and Department of Computer Science 3 - Computer Architecture who fund and support our work. We would also like to thank the following organizations for granting us allocations of their compute resources: LSU HPC, LONI, XSEDE and the Gauss Center for Supercomputing.

HPX is currently funded by the following grants:

  • The National Science Foundation through awards 1117470 (APX), 1240655 (STAR), 1447831 (PXFS), and 1339782 (STORM). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
  • The Department of Energy (DoE) through the award DE-SC0008714 (XPRESS). Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.
  • The Bavarian Research Foundation (Bayerische Forschungsstiftung) through the grant AZ-987-11.